00:00:00.001 Started by upstream project "autotest-per-patch" build number 132349
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.061 The recommended git tool is: git
00:00:00.061 using credential 00000000-0000-0000-0000-000000000002
00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.097 Fetching changes from the remote Git repository
00:00:00.099 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.142 Using shallow fetch with depth 1
00:00:00.142 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.142 > git --version # timeout=10
00:00:00.179 > git --version # 'git version 2.39.2'
00:00:00.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.214 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.214 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.031 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.044 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.055 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.056 > git config core.sparsecheckout # timeout=10
00:00:04.069 > git read-tree -mu HEAD # timeout=10
00:00:04.084 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.110 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.111 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.201 [Pipeline] Start of Pipeline
00:00:04.213 [Pipeline] library
00:00:04.215 Loading library shm_lib@master
00:00:04.215 Library shm_lib@master is cached. Copying from home.
00:00:04.230 [Pipeline] node
00:00:04.242 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.244 [Pipeline] {
00:00:04.252 [Pipeline] catchError
00:00:04.253 [Pipeline] {
00:00:04.263 [Pipeline] wrap
00:00:04.270 [Pipeline] {
00:00:04.277 [Pipeline] stage
00:00:04.280 [Pipeline] { (Prologue)
00:00:04.300 [Pipeline] echo
00:00:04.302 Node: VM-host-WFP7
00:00:04.308 [Pipeline] cleanWs
00:00:04.318 [WS-CLEANUP] Deleting project workspace...
00:00:04.318 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.325 [WS-CLEANUP] done
00:00:04.521 [Pipeline] setCustomBuildProperty
00:00:04.587 [Pipeline] httpRequest
00:00:04.900 [Pipeline] echo
00:00:04.901 Sorcerer 10.211.164.20 is alive
00:00:04.908 [Pipeline] retry
00:00:04.909 [Pipeline] {
00:00:04.918 [Pipeline] httpRequest
00:00:04.923 HttpMethod: GET
00:00:04.923 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.924 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.939 Response Code: HTTP/1.1 200 OK
00:00:04.939 Success: Status code 200 is in the accepted range: 200,404
00:00:04.940 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.880 [Pipeline] }
00:00:06.895 [Pipeline] // retry
00:00:06.902 [Pipeline] sh
00:00:07.188 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.202 [Pipeline] httpRequest
00:00:07.581 [Pipeline] echo
00:00:07.583 Sorcerer 10.211.164.20 is alive
00:00:07.590 [Pipeline] retry
00:00:07.594 [Pipeline] {
00:00:07.613 [Pipeline] httpRequest
00:00:07.617 HttpMethod: GET
00:00:07.618 URL: http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz
00:00:07.619 Sending request to url: http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz
00:00:07.631 Response Code: HTTP/1.1 200 OK
00:00:07.632 Success: Status code 200 is in the accepted range: 200,404
00:00:07.632 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz
00:03:22.233 [Pipeline] }
00:03:22.249 [Pipeline] // retry
00:03:22.255 [Pipeline] sh
00:03:22.532 + tar --no-same-owner -xf spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz
00:03:25.073 [Pipeline] sh
00:03:25.349 + git -C spdk log --oneline -n5
00:03:25.349 6fc96a60f test/nvmf: Prepare replacements for the network setup
00:03:25.349 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:03:25.349 8d982eda9 dpdk: add adjustments for recent rte_power changes
00:03:25.349 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:03:25.349 73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:03:25.365 [Pipeline] writeFile
00:03:25.378 [Pipeline] sh
00:03:25.652 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:25.662 [Pipeline] sh
00:03:25.940 + cat autorun-spdk.conf
00:03:25.940 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:25.940 SPDK_RUN_ASAN=1
00:03:25.940 SPDK_RUN_UBSAN=1
00:03:25.940 SPDK_TEST_RAID=1
00:03:25.940 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:25.945 RUN_NIGHTLY=0
00:03:25.947 [Pipeline] }
00:03:25.961 [Pipeline] // stage
00:03:25.977 [Pipeline] stage
00:03:25.979 [Pipeline] { (Run VM)
00:03:25.991 [Pipeline] sh
00:03:26.282 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:26.282 + echo 'Start stage prepare_nvme.sh'
00:03:26.282 Start stage prepare_nvme.sh
00:03:26.282 + [[ -n 1 ]]
00:03:26.282 + disk_prefix=ex1
00:03:26.282 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:03:26.282 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:03:26.282 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:03:26.282 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:26.282 ++ SPDK_RUN_ASAN=1
00:03:26.282 ++ SPDK_RUN_UBSAN=1
00:03:26.282 ++ SPDK_TEST_RAID=1
00:03:26.282 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:26.282 ++ RUN_NIGHTLY=0
00:03:26.282 + cd /var/jenkins/workspace/raid-vg-autotest
00:03:26.282 + nvme_files=()
00:03:26.282 + declare -A nvme_files
00:03:26.282 + backend_dir=/var/lib/libvirt/images/backends
00:03:26.282 + nvme_files['nvme.img']=5G
00:03:26.282 + nvme_files['nvme-cmb.img']=5G
00:03:26.282 + nvme_files['nvme-multi0.img']=4G
00:03:26.282 + nvme_files['nvme-multi1.img']=4G
00:03:26.282 + nvme_files['nvme-multi2.img']=4G
00:03:26.282 + nvme_files['nvme-openstack.img']=8G
00:03:26.282 + nvme_files['nvme-zns.img']=5G
00:03:26.282 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:26.282 + (( SPDK_TEST_FTL == 1 ))
00:03:26.282 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:26.282 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:26.282 + for nvme in "${!nvme_files[@]}"
00:03:26.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:03:26.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:26.282 + for nvme in "${!nvme_files[@]}"
00:03:26.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:03:26.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:26.282 + for nvme in "${!nvme_files[@]}"
00:03:26.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:03:26.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:26.282 + for nvme in "${!nvme_files[@]}"
00:03:26.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:03:26.283 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:26.283 + for nvme in "${!nvme_files[@]}"
00:03:26.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:03:26.283 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:26.283 + for nvme in "${!nvme_files[@]}"
00:03:26.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:03:26.283 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:26.283 + for nvme in "${!nvme_files[@]}"
00:03:26.283 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:03:26.283 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:26.542 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:03:26.542 + echo 'End stage prepare_nvme.sh'
00:03:26.542 End stage prepare_nvme.sh
00:03:26.553 [Pipeline] sh
00:03:26.834 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:26.834 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:03:26.834
00:03:26.834 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:03:26.834 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:03:26.834 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:03:26.834 HELP=0
00:03:26.834 DRY_RUN=0
00:03:26.834 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:03:26.834 NVME_DISKS_TYPE=nvme,nvme,
00:03:26.834 NVME_AUTO_CREATE=0
00:03:26.834 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:03:26.834 NVME_CMB=,,
00:03:26.834 NVME_PMR=,,
00:03:26.834 NVME_ZNS=,,
00:03:26.834 NVME_MS=,,
00:03:26.834 NVME_FDP=,,
00:03:26.834 SPDK_VAGRANT_DISTRO=fedora39
00:03:26.834 SPDK_VAGRANT_VMCPU=10
00:03:26.834 SPDK_VAGRANT_VMRAM=12288
00:03:26.834 SPDK_VAGRANT_PROVIDER=libvirt
00:03:26.834 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:26.834 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:26.834 SPDK_OPENSTACK_NETWORK=0
00:03:26.834 VAGRANT_PACKAGE_BOX=0
00:03:26.834 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:26.834 FORCE_DISTRO=true
00:03:26.834 VAGRANT_BOX_VERSION=
00:03:26.834 EXTRA_VAGRANTFILES=
00:03:26.834 NIC_MODEL=virtio
00:03:26.834
00:03:26.834 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:03:26.834 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:03:29.365 Bringing machine 'default' up with 'libvirt' provider...
00:03:29.625 ==> default: Creating image (snapshot of base box volume).
00:03:29.625 ==> default: Creating domain with the following settings...
00:03:29.625 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732086011_29c40ce3c81506ff21f3
00:03:29.625 ==> default: -- Domain type: kvm
00:03:29.625 ==> default: -- Cpus: 10
00:03:29.625 ==> default: -- Feature: acpi
00:03:29.625 ==> default: -- Feature: apic
00:03:29.625 ==> default: -- Feature: pae
00:03:29.625 ==> default: -- Memory: 12288M
00:03:29.625 ==> default: -- Memory Backing: hugepages:
00:03:29.625 ==> default: -- Management MAC:
00:03:29.625 ==> default: -- Loader:
00:03:29.625 ==> default: -- Nvram:
00:03:29.625 ==> default: -- Base box: spdk/fedora39
00:03:29.625 ==> default: -- Storage pool: default
00:03:29.625 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732086011_29c40ce3c81506ff21f3.img (20G)
00:03:29.625 ==> default: -- Volume Cache: default
00:03:29.625 ==> default: -- Kernel:
00:03:29.625 ==> default: -- Initrd:
00:03:29.625 ==> default: -- Graphics Type: vnc
00:03:29.625 ==> default: -- Graphics Port: -1
00:03:29.625 ==> default: -- Graphics IP: 127.0.0.1
00:03:29.625 ==> default: -- Graphics Password: Not defined
00:03:29.625 ==> default: -- Video Type: cirrus
00:03:29.625 ==> default: -- Video VRAM: 9216
00:03:29.625 ==> default: -- Sound Type:
00:03:29.625 ==> default: -- Keymap: en-us
00:03:29.625 ==> default: -- TPM Path:
00:03:29.625 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:29.625 ==> default: -- Command line args:
00:03:29.625 ==> default: -> value=-device,
00:03:29.625 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:29.625 ==> default: -> value=-drive,
00:03:29.625 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:03:29.625 ==> default: -> value=-device,
00:03:29.625 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:29.625 ==> default: -> value=-device,
00:03:29.625 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:29.625 ==> default: -> value=-drive,
00:03:29.625 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:29.625 ==> default: -> value=-device,
00:03:29.625 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:29.625 ==> default: -> value=-drive,
00:03:29.625 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:29.625 ==> default: -> value=-device,
00:03:29.625 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:29.625 ==> default: -> value=-drive,
00:03:29.625 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:29.625 ==> default: -> value=-device,
00:03:29.625 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:29.885 ==> default: Creating shared folders metadata...
00:03:29.885 ==> default: Starting domain.
00:03:31.264 ==> default: Waiting for domain to get an IP address...
00:03:49.355 ==> default: Waiting for SSH to become available...
00:03:49.355 ==> default: Configuring and enabling network interfaces...
00:03:54.629 default: SSH address: 192.168.121.178:22
00:03:54.629 default: SSH username: vagrant
00:03:54.629 default: SSH auth method: private key
00:03:57.166 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:05.287 ==> default: Mounting SSHFS shared folder...
00:04:07.211 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:07.211 ==> default: Checking Mount..
00:04:08.591 ==> default: Folder Successfully Mounted!
00:04:08.591 ==> default: Running provisioner: file...
00:04:09.527 default: ~/.gitconfig => .gitconfig
00:04:10.095
00:04:10.095 SUCCESS!
00:04:10.095
00:04:10.095 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:04:10.095 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:10.095 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:04:10.095
00:04:10.104 [Pipeline] }
00:04:10.120 [Pipeline] // stage
00:04:10.129 [Pipeline] dir
00:04:10.130 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:04:10.132 [Pipeline] {
00:04:10.145 [Pipeline] catchError
00:04:10.146 [Pipeline] {
00:04:10.159 [Pipeline] sh
00:04:10.441 + vagrant ssh-config --host vagrant
00:04:10.441 + sed -ne /^Host/,$p
00:04:10.441 + tee ssh_conf
00:04:13.731 Host vagrant
00:04:13.731 HostName 192.168.121.178
00:04:13.731 User vagrant
00:04:13.731 Port 22
00:04:13.731 UserKnownHostsFile /dev/null
00:04:13.731 StrictHostKeyChecking no
00:04:13.731 PasswordAuthentication no
00:04:13.731 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:13.731 IdentitiesOnly yes
00:04:13.731 LogLevel FATAL
00:04:13.731 ForwardAgent yes
00:04:13.731 ForwardX11 yes
00:04:13.731
00:04:13.745 [Pipeline] withEnv
00:04:13.747 [Pipeline] {
00:04:13.761 [Pipeline] sh
00:04:14.042 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:14.042 source /etc/os-release
00:04:14.042 [[ -e /image.version ]] && img=$(< /image.version)
00:04:14.042 # Minimal, systemd-like check.
00:04:14.042 if [[ -e /.dockerenv ]]; then
00:04:14.042 # Clear garbage from the node's name:
00:04:14.042 # agt-er_autotest_547-896 -> autotest_547-896
00:04:14.042 # $HOSTNAME is the actual container id
00:04:14.042 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:14.042 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:14.042 # We can assume this is a mount from a host where container is running,
00:04:14.042 # so fetch its hostname to easily identify the target swarm worker.
00:04:14.042 container="$(< /etc/hostname) ($agent)"
00:04:14.042 else
00:04:14.042 # Fallback
00:04:14.042 container=$agent
00:04:14.042 fi
00:04:14.042 fi
00:04:14.042 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:14.042
00:04:14.315 [Pipeline] }
00:04:14.332 [Pipeline] // withEnv
00:04:14.340 [Pipeline] setCustomBuildProperty
00:04:14.357 [Pipeline] stage
00:04:14.360 [Pipeline] { (Tests)
00:04:14.379 [Pipeline] sh
00:04:14.662 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:14.934 [Pipeline] sh
00:04:15.216 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:15.489 [Pipeline] timeout
00:04:15.490 Timeout set to expire in 1 hr 30 min
00:04:15.491 [Pipeline] {
00:04:15.505 [Pipeline] sh
00:04:15.788 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:16.356 HEAD is now at 6fc96a60f test/nvmf: Prepare replacements for the network setup
00:04:16.367 [Pipeline] sh
00:04:16.650 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:16.921 [Pipeline] sh
00:04:17.199 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:17.470 [Pipeline] sh
00:04:17.751 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:04:18.010 ++ readlink -f spdk_repo
00:04:18.010 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:18.010 + [[ -n /home/vagrant/spdk_repo ]]
00:04:18.010 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:18.010 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:18.010 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:18.010 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:18.010 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:18.010 + [[ raid-vg-autotest == pkgdep-* ]]
00:04:18.010 + cd /home/vagrant/spdk_repo
00:04:18.010 + source /etc/os-release
00:04:18.010 ++ NAME='Fedora Linux'
00:04:18.010 ++ VERSION='39 (Cloud Edition)'
00:04:18.010 ++ ID=fedora
00:04:18.010 ++ VERSION_ID=39
00:04:18.010 ++ VERSION_CODENAME=
00:04:18.010 ++ PLATFORM_ID=platform:f39
00:04:18.010 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:18.010 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:18.010 ++ LOGO=fedora-logo-icon
00:04:18.010 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:18.010 ++ HOME_URL=https://fedoraproject.org/
00:04:18.010 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:18.010 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:18.010 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:18.010 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:18.010 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:18.010 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:18.010 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:18.010 ++ SUPPORT_END=2024-11-12
00:04:18.010 ++ VARIANT='Cloud Edition'
00:04:18.010 ++ VARIANT_ID=cloud
00:04:18.010 + uname -a
00:04:18.010 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:18.010 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:18.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:18.579 Hugepages
00:04:18.579 node hugesize free / total
00:04:18.579 node0 1048576kB 0 / 0
00:04:18.579 node0 2048kB 0 / 0
00:04:18.579
00:04:18.579 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:18.579 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:18.579 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:18.579 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:18.579 + rm -f /tmp/spdk-ld-path
00:04:18.579 + source autorun-spdk.conf
00:04:18.579 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:18.579 ++ SPDK_RUN_ASAN=1
00:04:18.579 ++ SPDK_RUN_UBSAN=1
00:04:18.579 ++ SPDK_TEST_RAID=1
00:04:18.579 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:18.579 ++ RUN_NIGHTLY=0
00:04:18.579 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:18.579 + [[ -n '' ]]
00:04:18.579 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:18.579 + for M in /var/spdk/build-*-manifest.txt
00:04:18.579 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:18.579 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:18.579 + for M in /var/spdk/build-*-manifest.txt
00:04:18.579 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:18.579 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:18.579 + for M in /var/spdk/build-*-manifest.txt
00:04:18.579 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:18.579 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:18.579 ++ uname
00:04:18.579 + [[ Linux == \L\i\n\u\x ]]
00:04:18.579 + sudo dmesg -T
00:04:18.840 + sudo dmesg --clear
00:04:18.840 + dmesg_pid=5430
00:04:18.840 + [[ Fedora Linux == FreeBSD ]]
00:04:18.840 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:18.840 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:18.840 + sudo dmesg -Tw
00:04:18.840 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:18.840 + [[ -x /usr/src/fio-static/fio ]]
00:04:18.840 + export FIO_BIN=/usr/src/fio-static/fio
00:04:18.840 + FIO_BIN=/usr/src/fio-static/fio
00:04:18.840 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:18.840 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:18.840 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:18.840 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:18.840 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:18.840 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:18.840 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:18.840 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:18.840 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:18.840 07:01:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:18.840 07:01:01 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:18.840 07:01:01 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:18.840 07:01:01 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:04:18.840 07:01:01 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:04:18.840 07:01:01 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:04:18.840 07:01:01 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:18.840 07:01:01 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:04:18.840 07:01:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:18.840 07:01:01 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:19.099 07:01:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:19.099 07:01:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:19.099 07:01:01 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:19.099 07:01:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:19.099 07:01:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:19.099 07:01:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:19.099 07:01:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.100 07:01:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.100 07:01:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.100 07:01:01 -- paths/export.sh@5 -- $ export PATH
00:04:19.100 07:01:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.100 07:01:01 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:19.100 07:01:01 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:19.100 07:01:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086061.XXXXXX
00:04:19.100 07:01:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086061.ZymCPl
00:04:19.100 07:01:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:19.100 07:01:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:19.100 07:01:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:19.100 07:01:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:19.100 07:01:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:19.100 07:01:01 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:19.100 07:01:01 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:19.100 07:01:01 -- common/autotest_common.sh@10 -- $ set +x
00:04:19.100 07:01:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:04:19.100 07:01:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:19.100 07:01:01 -- pm/common@17 -- $ local monitor
00:04:19.100 07:01:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:19.100 07:01:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:19.100 07:01:01 -- pm/common@25 -- $ sleep 1
00:04:19.100 07:01:01 -- pm/common@21 -- $ date +%s
00:04:19.100 07:01:01 -- pm/common@21 -- $ date +%s
00:04:19.100 07:01:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086061
00:04:19.100 07:01:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086061
00:04:19.100 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086061_collect-cpu-load.pm.log
00:04:19.100 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086061_collect-vmstat.pm.log
00:04:20.037 07:01:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:20.037 07:01:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:20.037 07:01:02 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:20.037 07:01:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:20.037 07:01:02 -- spdk/autobuild.sh@16 -- $ date -u
00:04:20.037 Wed Nov 20 07:01:02 AM UTC 2024
00:04:20.037 07:01:02 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:20.037 v25.01-pre-200-g6fc96a60f
00:04:20.037 07:01:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:20.037 07:01:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:20.038 07:01:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:20.038 07:01:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:20.038 07:01:02 -- common/autotest_common.sh@10 -- $ set +x
00:04:20.038 ************************************
00:04:20.038 START TEST asan
00:04:20.038 ************************************
00:04:20.038 using asan
00:04:20.038 07:01:02 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:20.038
00:04:20.038 real 0m0.001s
00:04:20.038 user 0m0.000s
00:04:20.038 sys 0m0.000s
00:04:20.038 07:01:02 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:20.038 07:01:02 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:20.038 ************************************
00:04:20.038 END TEST asan
00:04:20.038 ************************************
00:04:20.038 07:01:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:20.038 07:01:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:20.038 07:01:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:20.038 07:01:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:20.038 07:01:02 -- common/autotest_common.sh@10 -- $ set +x
00:04:20.038 ************************************
00:04:20.038 START TEST ubsan
00:04:20.038 ************************************
00:04:20.038 using ubsan
00:04:20.038 07:01:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:20.038
00:04:20.038 real 0m0.000s
00:04:20.038 user 0m0.000s
00:04:20.038 sys 0m0.000s
00:04:20.038 07:01:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:20.038 07:01:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:20.038 ************************************
00:04:20.038 END TEST ubsan
00:04:20.038 ************************************
00:04:20.298 07:01:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:20.298 07:01:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:20.298 07:01:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:20.298 07:01:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:20.298 07:01:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:20.298 07:01:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:20.298 07:01:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:20.298 07:01:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:20.298 07:01:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:04:20.298 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:20.298 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:20.865 Using 'verbs' RDMA provider
00:04:36.736 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:54.834 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:54.834 Creating mk/config.mk...done.
00:04:54.834 Creating mk/cc.flags.mk...done.
00:04:54.834 Type 'make' to build.
00:04:54.834 07:01:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:54.834 07:01:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:54.834 07:01:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:54.834 07:01:34 -- common/autotest_common.sh@10 -- $ set +x
00:04:54.834 ************************************
00:04:54.834 START TEST make
00:04:54.834 ************************************
00:04:54.834 07:01:34 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:54.834 make[1]: Nothing to be done for 'all'.
00:05:04.833 The Meson build system
00:05:04.833 Version: 1.5.0
00:05:04.833 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:05:04.833 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:05:04.833 Build type: native build
00:05:04.833 Program cat found: YES (/usr/bin/cat)
00:05:04.833 Project name: DPDK
00:05:04.833 Project version: 24.03.0
00:05:04.833 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:04.833 C linker for the host machine: cc ld.bfd 2.40-14
00:05:04.833 Host machine cpu family: x86_64
00:05:04.833 Host machine cpu: x86_64
00:05:04.833 Message: ## Building in Developer Mode ##
00:05:04.833 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:04.833 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:05:04.833 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:04.833 Program python3 found: YES (/usr/bin/python3)
00:05:04.833 Program cat found: YES (/usr/bin/cat)
00:05:04.833 Compiler for C supports arguments -march=native: YES
00:05:04.833 Checking for size of "void *" : 8
00:05:04.833 Checking for size of "void *" : 8 (cached)
00:05:04.833 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:04.833 Library m found: YES
00:05:04.833 Library numa found: YES
00:05:04.833 Has header "numaif.h" : YES
00:05:04.833 Library fdt found: NO
00:05:04.833 Library execinfo found: NO
00:05:04.833 Has header "execinfo.h" : YES
00:05:04.833 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:04.833 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:04.833 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:04.833 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:04.833 Run-time dependency openssl found: YES 3.1.1
00:05:04.833 Run-time dependency libpcap found: YES 1.10.4
00:05:04.833 Has header "pcap.h" with dependency
libpcap: YES 00:05:04.833 Compiler for C supports arguments -Wcast-qual: YES 00:05:04.833 Compiler for C supports arguments -Wdeprecated: YES 00:05:04.833 Compiler for C supports arguments -Wformat: YES 00:05:04.833 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:04.833 Compiler for C supports arguments -Wformat-security: NO 00:05:04.833 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:04.833 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:04.833 Compiler for C supports arguments -Wnested-externs: YES 00:05:04.833 Compiler for C supports arguments -Wold-style-definition: YES 00:05:04.833 Compiler for C supports arguments -Wpointer-arith: YES 00:05:04.833 Compiler for C supports arguments -Wsign-compare: YES 00:05:04.833 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:04.833 Compiler for C supports arguments -Wundef: YES 00:05:04.833 Compiler for C supports arguments -Wwrite-strings: YES 00:05:04.833 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:04.833 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:04.833 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:04.833 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:04.833 Program objdump found: YES (/usr/bin/objdump) 00:05:04.833 Compiler for C supports arguments -mavx512f: YES 00:05:04.833 Checking if "AVX512 checking" compiles: YES 00:05:04.833 Fetching value of define "__SSE4_2__" : 1 00:05:04.833 Fetching value of define "__AES__" : 1 00:05:04.833 Fetching value of define "__AVX__" : 1 00:05:04.833 Fetching value of define "__AVX2__" : 1 00:05:04.833 Fetching value of define "__AVX512BW__" : 1 00:05:04.833 Fetching value of define "__AVX512CD__" : 1 00:05:04.833 Fetching value of define "__AVX512DQ__" : 1 00:05:04.833 Fetching value of define "__AVX512F__" : 1 00:05:04.833 Fetching value of define "__AVX512VL__" : 1 00:05:04.833 Fetching value of define 
"__PCLMUL__" : 1 00:05:04.833 Fetching value of define "__RDRND__" : 1 00:05:04.833 Fetching value of define "__RDSEED__" : 1 00:05:04.833 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:04.833 Fetching value of define "__znver1__" : (undefined) 00:05:04.833 Fetching value of define "__znver2__" : (undefined) 00:05:04.833 Fetching value of define "__znver3__" : (undefined) 00:05:04.833 Fetching value of define "__znver4__" : (undefined) 00:05:04.833 Library asan found: YES 00:05:04.833 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:04.833 Message: lib/log: Defining dependency "log" 00:05:04.833 Message: lib/kvargs: Defining dependency "kvargs" 00:05:04.833 Message: lib/telemetry: Defining dependency "telemetry" 00:05:04.833 Library rt found: YES 00:05:04.833 Checking for function "getentropy" : NO 00:05:04.833 Message: lib/eal: Defining dependency "eal" 00:05:04.833 Message: lib/ring: Defining dependency "ring" 00:05:04.833 Message: lib/rcu: Defining dependency "rcu" 00:05:04.833 Message: lib/mempool: Defining dependency "mempool" 00:05:04.833 Message: lib/mbuf: Defining dependency "mbuf" 00:05:04.833 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:04.833 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:04.833 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:04.833 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:04.833 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:04.833 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:04.833 Compiler for C supports arguments -mpclmul: YES 00:05:04.833 Compiler for C supports arguments -maes: YES 00:05:04.833 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:04.833 Compiler for C supports arguments -mavx512bw: YES 00:05:04.833 Compiler for C supports arguments -mavx512dq: YES 00:05:04.833 Compiler for C supports arguments -mavx512vl: YES 00:05:04.833 Compiler for C supports arguments -mvpclmulqdq: YES 
00:05:04.833 Compiler for C supports arguments -mavx2: YES 00:05:04.833 Compiler for C supports arguments -mavx: YES 00:05:04.833 Message: lib/net: Defining dependency "net" 00:05:04.833 Message: lib/meter: Defining dependency "meter" 00:05:04.833 Message: lib/ethdev: Defining dependency "ethdev" 00:05:04.833 Message: lib/pci: Defining dependency "pci" 00:05:04.833 Message: lib/cmdline: Defining dependency "cmdline" 00:05:04.833 Message: lib/hash: Defining dependency "hash" 00:05:04.833 Message: lib/timer: Defining dependency "timer" 00:05:04.833 Message: lib/compressdev: Defining dependency "compressdev" 00:05:04.833 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:04.833 Message: lib/dmadev: Defining dependency "dmadev" 00:05:04.833 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:04.833 Message: lib/power: Defining dependency "power" 00:05:04.833 Message: lib/reorder: Defining dependency "reorder" 00:05:04.833 Message: lib/security: Defining dependency "security" 00:05:04.833 Has header "linux/userfaultfd.h" : YES 00:05:04.833 Has header "linux/vduse.h" : YES 00:05:04.833 Message: lib/vhost: Defining dependency "vhost" 00:05:04.833 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:04.833 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:04.833 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:04.833 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:04.833 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:04.833 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:04.833 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:04.833 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:04.833 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:04.833 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:04.833 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:04.833 Configuring doxy-api-html.conf using configuration 00:05:04.833 Configuring doxy-api-man.conf using configuration 00:05:04.833 Program mandb found: YES (/usr/bin/mandb) 00:05:04.833 Program sphinx-build found: NO 00:05:04.833 Configuring rte_build_config.h using configuration 00:05:04.834 Message: 00:05:04.834 ================= 00:05:04.834 Applications Enabled 00:05:04.834 ================= 00:05:04.834 00:05:04.834 apps: 00:05:04.834 00:05:04.834 00:05:04.834 Message: 00:05:04.834 ================= 00:05:04.834 Libraries Enabled 00:05:04.834 ================= 00:05:04.834 00:05:04.834 libs: 00:05:04.834 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:04.834 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:04.834 cryptodev, dmadev, power, reorder, security, vhost, 00:05:04.834 00:05:04.834 Message: 00:05:04.834 =============== 00:05:04.834 Drivers Enabled 00:05:04.834 =============== 00:05:04.834 00:05:04.834 common: 00:05:04.834 00:05:04.834 bus: 00:05:04.834 pci, vdev, 00:05:04.834 mempool: 00:05:04.834 ring, 00:05:04.834 dma: 00:05:04.834 00:05:04.834 net: 00:05:04.834 00:05:04.834 crypto: 00:05:04.834 00:05:04.834 compress: 00:05:04.834 00:05:04.834 vdpa: 00:05:04.834 00:05:04.834 00:05:04.834 Message: 00:05:04.834 ================= 00:05:04.834 Content Skipped 00:05:04.834 ================= 00:05:04.834 00:05:04.834 apps: 00:05:04.834 dumpcap: explicitly disabled via build config 00:05:04.834 graph: explicitly disabled via build config 00:05:04.834 pdump: explicitly disabled via build config 00:05:04.834 proc-info: explicitly disabled via build config 00:05:04.834 test-acl: explicitly disabled via build config 00:05:04.834 test-bbdev: explicitly disabled via build config 00:05:04.834 test-cmdline: explicitly disabled via build config 00:05:04.834 test-compress-perf: explicitly disabled via build config 00:05:04.834 test-crypto-perf: explicitly disabled via build 
config 00:05:04.834 test-dma-perf: explicitly disabled via build config 00:05:04.834 test-eventdev: explicitly disabled via build config 00:05:04.834 test-fib: explicitly disabled via build config 00:05:04.834 test-flow-perf: explicitly disabled via build config 00:05:04.834 test-gpudev: explicitly disabled via build config 00:05:04.834 test-mldev: explicitly disabled via build config 00:05:04.834 test-pipeline: explicitly disabled via build config 00:05:04.834 test-pmd: explicitly disabled via build config 00:05:04.834 test-regex: explicitly disabled via build config 00:05:04.834 test-sad: explicitly disabled via build config 00:05:04.834 test-security-perf: explicitly disabled via build config 00:05:04.834 00:05:04.834 libs: 00:05:04.834 argparse: explicitly disabled via build config 00:05:04.834 metrics: explicitly disabled via build config 00:05:04.834 acl: explicitly disabled via build config 00:05:04.834 bbdev: explicitly disabled via build config 00:05:04.834 bitratestats: explicitly disabled via build config 00:05:04.834 bpf: explicitly disabled via build config 00:05:04.834 cfgfile: explicitly disabled via build config 00:05:04.834 distributor: explicitly disabled via build config 00:05:04.834 efd: explicitly disabled via build config 00:05:04.834 eventdev: explicitly disabled via build config 00:05:04.834 dispatcher: explicitly disabled via build config 00:05:04.834 gpudev: explicitly disabled via build config 00:05:04.834 gro: explicitly disabled via build config 00:05:04.834 gso: explicitly disabled via build config 00:05:04.834 ip_frag: explicitly disabled via build config 00:05:04.834 jobstats: explicitly disabled via build config 00:05:04.834 latencystats: explicitly disabled via build config 00:05:04.834 lpm: explicitly disabled via build config 00:05:04.834 member: explicitly disabled via build config 00:05:04.834 pcapng: explicitly disabled via build config 00:05:04.834 rawdev: explicitly disabled via build config 00:05:04.834 regexdev: explicitly 
disabled via build config 00:05:04.834 mldev: explicitly disabled via build config 00:05:04.834 rib: explicitly disabled via build config 00:05:04.834 sched: explicitly disabled via build config 00:05:04.834 stack: explicitly disabled via build config 00:05:04.834 ipsec: explicitly disabled via build config 00:05:04.834 pdcp: explicitly disabled via build config 00:05:04.834 fib: explicitly disabled via build config 00:05:04.834 port: explicitly disabled via build config 00:05:04.834 pdump: explicitly disabled via build config 00:05:04.834 table: explicitly disabled via build config 00:05:04.834 pipeline: explicitly disabled via build config 00:05:04.834 graph: explicitly disabled via build config 00:05:04.834 node: explicitly disabled via build config 00:05:04.834 00:05:04.834 drivers: 00:05:04.834 common/cpt: not in enabled drivers build config 00:05:04.834 common/dpaax: not in enabled drivers build config 00:05:04.834 common/iavf: not in enabled drivers build config 00:05:04.834 common/idpf: not in enabled drivers build config 00:05:04.834 common/ionic: not in enabled drivers build config 00:05:04.834 common/mvep: not in enabled drivers build config 00:05:04.834 common/octeontx: not in enabled drivers build config 00:05:04.834 bus/auxiliary: not in enabled drivers build config 00:05:04.834 bus/cdx: not in enabled drivers build config 00:05:04.834 bus/dpaa: not in enabled drivers build config 00:05:04.834 bus/fslmc: not in enabled drivers build config 00:05:04.834 bus/ifpga: not in enabled drivers build config 00:05:04.834 bus/platform: not in enabled drivers build config 00:05:04.834 bus/uacce: not in enabled drivers build config 00:05:04.834 bus/vmbus: not in enabled drivers build config 00:05:04.834 common/cnxk: not in enabled drivers build config 00:05:04.834 common/mlx5: not in enabled drivers build config 00:05:04.834 common/nfp: not in enabled drivers build config 00:05:04.834 common/nitrox: not in enabled drivers build config 00:05:04.834 common/qat: not 
in enabled drivers build config 00:05:04.834 common/sfc_efx: not in enabled drivers build config 00:05:04.834 mempool/bucket: not in enabled drivers build config 00:05:04.834 mempool/cnxk: not in enabled drivers build config 00:05:04.834 mempool/dpaa: not in enabled drivers build config 00:05:04.834 mempool/dpaa2: not in enabled drivers build config 00:05:04.834 mempool/octeontx: not in enabled drivers build config 00:05:04.834 mempool/stack: not in enabled drivers build config 00:05:04.834 dma/cnxk: not in enabled drivers build config 00:05:04.834 dma/dpaa: not in enabled drivers build config 00:05:04.834 dma/dpaa2: not in enabled drivers build config 00:05:04.834 dma/hisilicon: not in enabled drivers build config 00:05:04.834 dma/idxd: not in enabled drivers build config 00:05:04.834 dma/ioat: not in enabled drivers build config 00:05:04.834 dma/skeleton: not in enabled drivers build config 00:05:04.834 net/af_packet: not in enabled drivers build config 00:05:04.834 net/af_xdp: not in enabled drivers build config 00:05:04.834 net/ark: not in enabled drivers build config 00:05:04.834 net/atlantic: not in enabled drivers build config 00:05:04.834 net/avp: not in enabled drivers build config 00:05:04.834 net/axgbe: not in enabled drivers build config 00:05:04.834 net/bnx2x: not in enabled drivers build config 00:05:04.834 net/bnxt: not in enabled drivers build config 00:05:04.834 net/bonding: not in enabled drivers build config 00:05:04.834 net/cnxk: not in enabled drivers build config 00:05:04.834 net/cpfl: not in enabled drivers build config 00:05:04.834 net/cxgbe: not in enabled drivers build config 00:05:04.834 net/dpaa: not in enabled drivers build config 00:05:04.834 net/dpaa2: not in enabled drivers build config 00:05:04.834 net/e1000: not in enabled drivers build config 00:05:04.834 net/ena: not in enabled drivers build config 00:05:04.834 net/enetc: not in enabled drivers build config 00:05:04.834 net/enetfec: not in enabled drivers build config 
00:05:04.834 net/enic: not in enabled drivers build config 00:05:04.834 net/failsafe: not in enabled drivers build config 00:05:04.834 net/fm10k: not in enabled drivers build config 00:05:04.834 net/gve: not in enabled drivers build config 00:05:04.834 net/hinic: not in enabled drivers build config 00:05:04.834 net/hns3: not in enabled drivers build config 00:05:04.834 net/i40e: not in enabled drivers build config 00:05:04.834 net/iavf: not in enabled drivers build config 00:05:04.834 net/ice: not in enabled drivers build config 00:05:04.834 net/idpf: not in enabled drivers build config 00:05:04.834 net/igc: not in enabled drivers build config 00:05:04.834 net/ionic: not in enabled drivers build config 00:05:04.834 net/ipn3ke: not in enabled drivers build config 00:05:04.834 net/ixgbe: not in enabled drivers build config 00:05:04.834 net/mana: not in enabled drivers build config 00:05:04.834 net/memif: not in enabled drivers build config 00:05:04.834 net/mlx4: not in enabled drivers build config 00:05:04.834 net/mlx5: not in enabled drivers build config 00:05:04.834 net/mvneta: not in enabled drivers build config 00:05:04.834 net/mvpp2: not in enabled drivers build config 00:05:04.834 net/netvsc: not in enabled drivers build config 00:05:04.834 net/nfb: not in enabled drivers build config 00:05:04.834 net/nfp: not in enabled drivers build config 00:05:04.834 net/ngbe: not in enabled drivers build config 00:05:04.834 net/null: not in enabled drivers build config 00:05:04.834 net/octeontx: not in enabled drivers build config 00:05:04.834 net/octeon_ep: not in enabled drivers build config 00:05:04.834 net/pcap: not in enabled drivers build config 00:05:04.834 net/pfe: not in enabled drivers build config 00:05:04.834 net/qede: not in enabled drivers build config 00:05:04.834 net/ring: not in enabled drivers build config 00:05:04.834 net/sfc: not in enabled drivers build config 00:05:04.834 net/softnic: not in enabled drivers build config 00:05:04.834 net/tap: not in 
enabled drivers build config 00:05:04.834 net/thunderx: not in enabled drivers build config 00:05:04.834 net/txgbe: not in enabled drivers build config 00:05:04.834 net/vdev_netvsc: not in enabled drivers build config 00:05:04.835 net/vhost: not in enabled drivers build config 00:05:04.835 net/virtio: not in enabled drivers build config 00:05:04.835 net/vmxnet3: not in enabled drivers build config 00:05:04.835 raw/*: missing internal dependency, "rawdev" 00:05:04.835 crypto/armv8: not in enabled drivers build config 00:05:04.835 crypto/bcmfs: not in enabled drivers build config 00:05:04.835 crypto/caam_jr: not in enabled drivers build config 00:05:04.835 crypto/ccp: not in enabled drivers build config 00:05:04.835 crypto/cnxk: not in enabled drivers build config 00:05:04.835 crypto/dpaa_sec: not in enabled drivers build config 00:05:04.835 crypto/dpaa2_sec: not in enabled drivers build config 00:05:04.835 crypto/ipsec_mb: not in enabled drivers build config 00:05:04.835 crypto/mlx5: not in enabled drivers build config 00:05:04.835 crypto/mvsam: not in enabled drivers build config 00:05:04.835 crypto/nitrox: not in enabled drivers build config 00:05:04.835 crypto/null: not in enabled drivers build config 00:05:04.835 crypto/octeontx: not in enabled drivers build config 00:05:04.835 crypto/openssl: not in enabled drivers build config 00:05:04.835 crypto/scheduler: not in enabled drivers build config 00:05:04.835 crypto/uadk: not in enabled drivers build config 00:05:04.835 crypto/virtio: not in enabled drivers build config 00:05:04.835 compress/isal: not in enabled drivers build config 00:05:04.835 compress/mlx5: not in enabled drivers build config 00:05:04.835 compress/nitrox: not in enabled drivers build config 00:05:04.835 compress/octeontx: not in enabled drivers build config 00:05:04.835 compress/zlib: not in enabled drivers build config 00:05:04.835 regex/*: missing internal dependency, "regexdev" 00:05:04.835 ml/*: missing internal dependency, "mldev" 
00:05:04.835 vdpa/ifc: not in enabled drivers build config 00:05:04.835 vdpa/mlx5: not in enabled drivers build config 00:05:04.835 vdpa/nfp: not in enabled drivers build config 00:05:04.835 vdpa/sfc: not in enabled drivers build config 00:05:04.835 event/*: missing internal dependency, "eventdev" 00:05:04.835 baseband/*: missing internal dependency, "bbdev" 00:05:04.835 gpu/*: missing internal dependency, "gpudev" 00:05:04.835 00:05:04.835 00:05:04.835 Build targets in project: 85 00:05:04.835 00:05:04.835 DPDK 24.03.0 00:05:04.835 00:05:04.835 User defined options 00:05:04.835 buildtype : debug 00:05:04.835 default_library : shared 00:05:04.835 libdir : lib 00:05:04.835 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:04.835 b_sanitize : address 00:05:04.835 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:04.835 c_link_args : 00:05:04.835 cpu_instruction_set: native 00:05:04.835 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:04.835 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:04.835 enable_docs : false 00:05:04.835 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:04.835 enable_kmods : false 00:05:04.835 max_lcores : 128 00:05:04.835 tests : false 00:05:04.835 00:05:04.835 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:04.835 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:04.835 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:05:04.835 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:04.835 [3/268] Linking static target lib/librte_kvargs.a 00:05:04.835 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:04.835 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:04.835 [6/268] Linking static target lib/librte_log.a 00:05:05.106 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:05.106 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:05.106 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:05.366 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:05.366 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:05.366 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:05.366 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.366 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:05.366 [15/268] Linking static target lib/librte_telemetry.a 00:05:05.366 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:05.628 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:05.628 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:05.887 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.887 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:05.887 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:05.887 [22/268] Linking target lib/librte_log.so.24.1 00:05:05.887 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:05.887 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:05.887 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:06.147 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:06.147 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:06.147 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:06.147 [29/268] Linking target lib/librte_kvargs.so.24.1 00:05:06.147 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:06.406 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.406 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:06.406 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:06.406 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:06.406 [35/268] Linking target lib/librte_telemetry.so.24.1 00:05:06.406 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:06.406 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:06.406 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:06.406 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:06.665 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:06.665 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:06.665 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:06.665 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:06.665 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:06.924 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:05:06.924 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:06.924 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:06.924 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:07.204 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:07.204 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:07.204 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:07.204 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:07.204 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:07.464 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:07.464 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:07.464 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:07.464 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:07.723 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:07.723 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:07.723 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:07.723 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:07.723 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:07.983 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:07.983 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:07.983 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:07.983 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:08.242 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:08.242 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:08.242 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:08.242 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:08.242 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:08.242 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:08.242 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:08.501 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:08.501 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:08.501 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:08.501 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:08.501 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:08.501 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:08.759 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:08.759 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:08.759 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:08.759 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:08.759 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:09.017 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:09.017 [86/268] Linking static target lib/librte_eal.a 00:05:09.017 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:09.275 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:09.275 [89/268] Linking static target lib/librte_ring.a 00:05:09.275 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:09.275 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:09.275 [92/268] 
Linking static target lib/librte_rcu.a 00:05:09.275 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:09.275 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:09.532 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:09.532 [96/268] Linking static target lib/librte_mempool.a 00:05:09.532 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:09.790 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.790 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.790 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:09.790 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:09.790 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:09.790 [103/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:09.791 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:10.049 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:10.049 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:10.049 [107/268] Linking static target lib/librte_net.a 00:05:10.308 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:10.308 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:10.308 [110/268] Linking static target lib/librte_mbuf.a 00:05:10.308 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:10.308 [112/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:10.308 [113/268] Linking static target lib/librte_meter.a 00:05:10.572 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:10.572 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.572 [116/268] 
Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.572 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:10.870 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:10.870 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.128 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:11.129 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:11.387 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.647 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:11.647 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:11.647 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:11.647 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:11.647 [127/268] Linking static target lib/librte_pci.a 00:05:11.906 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:11.906 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:11.906 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:11.906 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:11.906 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:11.906 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:11.906 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:12.164 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:12.164 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.164 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 
00:05:12.164 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:12.164 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:12.164 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:12.164 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:12.422 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:12.422 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:12.422 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:12.422 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:12.681 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:12.681 [147/268] Linking static target lib/librte_cmdline.a 00:05:12.681 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:12.940 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:12.940 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:12.940 [151/268] Linking static target lib/librte_timer.a 00:05:12.940 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:12.940 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:13.199 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:13.199 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:13.199 [156/268] Linking static target lib/librte_ethdev.a 00:05:13.199 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:13.457 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:13.457 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:13.457 [160/268] Linking static target lib/librte_compressdev.a 
00:05:13.457 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.716 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:13.716 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:13.716 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:13.716 [165/268] Linking static target lib/librte_hash.a 00:05:13.975 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:13.975 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:13.975 [168/268] Linking static target lib/librte_dmadev.a 00:05:13.975 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:13.975 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:14.234 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:14.234 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:14.234 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.494 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:14.494 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.754 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:14.754 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:14.754 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.754 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:14.754 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:15.013 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:15.013 [182/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.013 [183/268] Linking static target lib/librte_cryptodev.a 00:05:15.013 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:15.013 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:15.272 [186/268] Linking static target lib/librte_power.a 00:05:15.531 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:15.531 [188/268] Linking static target lib/librte_reorder.a 00:05:15.531 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:15.531 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:15.531 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:15.532 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:15.532 [193/268] Linking static target lib/librte_security.a 00:05:16.100 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.100 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:16.359 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.359 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.618 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:16.618 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:16.618 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:16.618 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:16.878 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:17.136 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:17.136 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:05:17.136 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:17.136 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:17.394 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:17.394 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:17.394 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.394 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:17.394 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:17.651 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:17.651 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:17.651 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:17.651 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:17.651 [216/268] Linking static target drivers/librte_bus_vdev.a 00:05:17.652 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:17.652 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:17.652 [219/268] Linking static target drivers/librte_bus_pci.a 00:05:17.652 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:17.652 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:17.909 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.909 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:18.167 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:18.167 [225/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:18.167 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:18.167 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.150 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:20.082 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.082 [230/268] Linking target lib/librte_eal.so.24.1 00:05:20.339 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:20.339 [232/268] Linking target lib/librte_meter.so.24.1 00:05:20.339 [233/268] Linking target lib/librte_pci.so.24.1 00:05:20.339 [234/268] Linking target lib/librte_timer.so.24.1 00:05:20.339 [235/268] Linking target lib/librte_ring.so.24.1 00:05:20.339 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:20.339 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:20.340 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:20.340 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:20.340 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:20.340 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:20.340 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:20.597 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:20.597 [244/268] Linking target lib/librte_mempool.so.24.1 00:05:20.597 [245/268] Linking target lib/librte_rcu.so.24.1 00:05:20.597 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:20.597 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:20.597 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:05:20.597 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:20.854 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:20.854 [251/268] Linking target lib/librte_reorder.so.24.1 00:05:20.854 [252/268] Linking target lib/librte_net.so.24.1 00:05:20.854 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:20.854 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:21.111 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:21.111 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:21.111 [257/268] Linking target lib/librte_hash.so.24.1 00:05:21.111 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:21.111 [259/268] Linking target lib/librte_security.so.24.1 00:05:21.369 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:21.933 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.190 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:22.190 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:22.449 [264/268] Linking target lib/librte_power.so.24.1 00:05:23.824 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:24.082 [266/268] Linking static target lib/librte_vhost.a 00:05:25.984 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.242 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:26.242 INFO: autodetecting backend as ninja 00:05:26.242 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:48.163 CC lib/ut/ut.o 00:05:48.163 CC lib/ut_mock/mock.o 00:05:48.163 CC lib/log/log.o 00:05:48.164 CC lib/log/log_flags.o 00:05:48.164 CC lib/log/log_deprecated.o 00:05:48.164 LIB libspdk_ut.a 00:05:48.164 LIB libspdk_log.a 
00:05:48.164 SO libspdk_ut.so.2.0 00:05:48.164 LIB libspdk_ut_mock.a 00:05:48.164 SO libspdk_log.so.7.1 00:05:48.164 SO libspdk_ut_mock.so.6.0 00:05:48.164 SYMLINK libspdk_ut.so 00:05:48.164 SYMLINK libspdk_log.so 00:05:48.164 SYMLINK libspdk_ut_mock.so 00:05:48.164 CC lib/dma/dma.o 00:05:48.164 CC lib/ioat/ioat.o 00:05:48.164 CC lib/util/bit_array.o 00:05:48.164 CC lib/util/base64.o 00:05:48.164 CC lib/util/cpuset.o 00:05:48.164 CC lib/util/crc16.o 00:05:48.164 CC lib/util/crc32.o 00:05:48.164 CC lib/util/crc32c.o 00:05:48.164 CXX lib/trace_parser/trace.o 00:05:48.164 CC lib/vfio_user/host/vfio_user_pci.o 00:05:48.164 CC lib/vfio_user/host/vfio_user.o 00:05:48.164 CC lib/util/crc32_ieee.o 00:05:48.164 CC lib/util/crc64.o 00:05:48.164 LIB libspdk_dma.a 00:05:48.164 SO libspdk_dma.so.5.0 00:05:48.164 CC lib/util/dif.o 00:05:48.164 CC lib/util/fd.o 00:05:48.164 CC lib/util/fd_group.o 00:05:48.164 SYMLINK libspdk_dma.so 00:05:48.164 CC lib/util/file.o 00:05:48.164 CC lib/util/hexlify.o 00:05:48.164 CC lib/util/iov.o 00:05:48.164 LIB libspdk_ioat.a 00:05:48.164 SO libspdk_ioat.so.7.0 00:05:48.164 CC lib/util/math.o 00:05:48.164 CC lib/util/net.o 00:05:48.164 SYMLINK libspdk_ioat.so 00:05:48.164 CC lib/util/pipe.o 00:05:48.164 CC lib/util/strerror_tls.o 00:05:48.164 CC lib/util/string.o 00:05:48.164 LIB libspdk_vfio_user.a 00:05:48.164 CC lib/util/uuid.o 00:05:48.164 SO libspdk_vfio_user.so.5.0 00:05:48.164 CC lib/util/xor.o 00:05:48.164 SYMLINK libspdk_vfio_user.so 00:05:48.164 CC lib/util/zipf.o 00:05:48.164 CC lib/util/md5.o 00:05:48.164 LIB libspdk_util.a 00:05:48.164 SO libspdk_util.so.10.1 00:05:48.164 LIB libspdk_trace_parser.a 00:05:48.423 SYMLINK libspdk_util.so 00:05:48.423 SO libspdk_trace_parser.so.6.0 00:05:48.423 SYMLINK libspdk_trace_parser.so 00:05:48.423 CC lib/json/json_parse.o 00:05:48.423 CC lib/json/json_write.o 00:05:48.423 CC lib/json/json_util.o 00:05:48.423 CC lib/env_dpdk/env.o 00:05:48.423 CC lib/env_dpdk/memory.o 00:05:48.423 CC 
lib/env_dpdk/pci.o 00:05:48.423 CC lib/vmd/vmd.o 00:05:48.423 CC lib/idxd/idxd.o 00:05:48.423 CC lib/conf/conf.o 00:05:48.423 CC lib/rdma_utils/rdma_utils.o 00:05:48.682 LIB libspdk_conf.a 00:05:48.682 CC lib/vmd/led.o 00:05:48.682 CC lib/env_dpdk/init.o 00:05:48.682 SO libspdk_conf.so.6.0 00:05:48.682 LIB libspdk_json.a 00:05:48.682 LIB libspdk_rdma_utils.a 00:05:48.941 SYMLINK libspdk_conf.so 00:05:48.941 SO libspdk_rdma_utils.so.1.0 00:05:48.941 CC lib/env_dpdk/threads.o 00:05:48.941 SO libspdk_json.so.6.0 00:05:48.941 SYMLINK libspdk_rdma_utils.so 00:05:48.941 SYMLINK libspdk_json.so 00:05:48.941 CC lib/env_dpdk/pci_ioat.o 00:05:48.941 CC lib/env_dpdk/pci_virtio.o 00:05:48.942 CC lib/idxd/idxd_user.o 00:05:48.942 CC lib/env_dpdk/pci_vmd.o 00:05:48.942 CC lib/env_dpdk/pci_idxd.o 00:05:48.942 CC lib/env_dpdk/pci_event.o 00:05:48.942 CC lib/env_dpdk/sigbus_handler.o 00:05:48.942 CC lib/env_dpdk/pci_dpdk.o 00:05:49.200 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:49.200 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:49.200 CC lib/idxd/idxd_kernel.o 00:05:49.200 LIB libspdk_vmd.a 00:05:49.200 SO libspdk_vmd.so.6.0 00:05:49.200 LIB libspdk_idxd.a 00:05:49.458 SYMLINK libspdk_vmd.so 00:05:49.458 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:49.458 CC lib/jsonrpc/jsonrpc_server.o 00:05:49.458 CC lib/jsonrpc/jsonrpc_client.o 00:05:49.458 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:49.458 SO libspdk_idxd.so.12.1 00:05:49.458 CC lib/rdma_provider/common.o 00:05:49.458 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:49.458 SYMLINK libspdk_idxd.so 00:05:49.715 LIB libspdk_rdma_provider.a 00:05:49.715 LIB libspdk_jsonrpc.a 00:05:49.716 SO libspdk_rdma_provider.so.7.0 00:05:49.716 SO libspdk_jsonrpc.so.6.0 00:05:49.716 SYMLINK libspdk_rdma_provider.so 00:05:49.716 SYMLINK libspdk_jsonrpc.so 00:05:50.282 LIB libspdk_env_dpdk.a 00:05:50.282 CC lib/rpc/rpc.o 00:05:50.282 SO libspdk_env_dpdk.so.15.1 00:05:50.540 SYMLINK libspdk_env_dpdk.so 00:05:50.540 LIB libspdk_rpc.a 00:05:50.540 SO 
libspdk_rpc.so.6.0 00:05:50.540 SYMLINK libspdk_rpc.so 00:05:50.798 CC lib/trace/trace.o 00:05:50.798 CC lib/trace/trace_flags.o 00:05:50.798 CC lib/trace/trace_rpc.o 00:05:50.798 CC lib/keyring/keyring.o 00:05:50.798 CC lib/keyring/keyring_rpc.o 00:05:50.798 CC lib/notify/notify.o 00:05:50.798 CC lib/notify/notify_rpc.o 00:05:51.056 LIB libspdk_notify.a 00:05:51.056 SO libspdk_notify.so.6.0 00:05:51.056 LIB libspdk_keyring.a 00:05:51.056 LIB libspdk_trace.a 00:05:51.318 SO libspdk_keyring.so.2.0 00:05:51.318 SYMLINK libspdk_notify.so 00:05:51.318 SO libspdk_trace.so.11.0 00:05:51.318 SYMLINK libspdk_keyring.so 00:05:51.318 SYMLINK libspdk_trace.so 00:05:51.576 CC lib/sock/sock.o 00:05:51.576 CC lib/sock/sock_rpc.o 00:05:51.835 CC lib/thread/iobuf.o 00:05:51.835 CC lib/thread/thread.o 00:05:52.094 LIB libspdk_sock.a 00:05:52.094 SO libspdk_sock.so.10.0 00:05:52.352 SYMLINK libspdk_sock.so 00:05:52.620 CC lib/nvme/nvme_ctrlr.o 00:05:52.620 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:52.620 CC lib/nvme/nvme_ns_cmd.o 00:05:52.620 CC lib/nvme/nvme_fabric.o 00:05:52.620 CC lib/nvme/nvme_pcie.o 00:05:52.620 CC lib/nvme/nvme_ns.o 00:05:52.620 CC lib/nvme/nvme_pcie_common.o 00:05:52.620 CC lib/nvme/nvme_qpair.o 00:05:52.620 CC lib/nvme/nvme.o 00:05:53.208 CC lib/nvme/nvme_quirks.o 00:05:53.208 LIB libspdk_thread.a 00:05:53.465 SO libspdk_thread.so.11.0 00:05:53.465 SYMLINK libspdk_thread.so 00:05:53.465 CC lib/nvme/nvme_transport.o 00:05:53.465 CC lib/nvme/nvme_discovery.o 00:05:53.465 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:53.465 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:53.465 CC lib/nvme/nvme_tcp.o 00:05:53.723 CC lib/nvme/nvme_opal.o 00:05:53.723 CC lib/nvme/nvme_io_msg.o 00:05:53.982 CC lib/nvme/nvme_poll_group.o 00:05:53.982 CC lib/nvme/nvme_zns.o 00:05:53.982 CC lib/nvme/nvme_stubs.o 00:05:53.982 CC lib/nvme/nvme_auth.o 00:05:53.982 CC lib/nvme/nvme_cuse.o 00:05:54.242 CC lib/nvme/nvme_rdma.o 00:05:54.501 CC lib/accel/accel.o 00:05:54.501 CC lib/accel/accel_rpc.o 00:05:54.501 
CC lib/accel/accel_sw.o 00:05:54.759 CC lib/blob/blobstore.o 00:05:54.759 CC lib/init/json_config.o 00:05:54.759 CC lib/virtio/virtio.o 00:05:54.759 CC lib/virtio/virtio_vhost_user.o 00:05:55.017 CC lib/virtio/virtio_vfio_user.o 00:05:55.017 CC lib/init/subsystem.o 00:05:55.017 CC lib/init/subsystem_rpc.o 00:05:55.017 CC lib/init/rpc.o 00:05:55.017 CC lib/blob/request.o 00:05:55.017 CC lib/blob/zeroes.o 00:05:55.275 CC lib/blob/blob_bs_dev.o 00:05:55.275 CC lib/virtio/virtio_pci.o 00:05:55.275 LIB libspdk_init.a 00:05:55.275 SO libspdk_init.so.6.0 00:05:55.275 CC lib/fsdev/fsdev.o 00:05:55.275 CC lib/fsdev/fsdev_rpc.o 00:05:55.275 CC lib/fsdev/fsdev_io.o 00:05:55.275 SYMLINK libspdk_init.so 00:05:55.534 LIB libspdk_virtio.a 00:05:55.534 CC lib/event/app_rpc.o 00:05:55.534 CC lib/event/log_rpc.o 00:05:55.534 CC lib/event/reactor.o 00:05:55.534 CC lib/event/app.o 00:05:55.534 SO libspdk_virtio.so.7.0 00:05:55.534 LIB libspdk_accel.a 00:05:55.534 SYMLINK libspdk_virtio.so 00:05:55.534 CC lib/event/scheduler_static.o 00:05:55.792 SO libspdk_accel.so.16.0 00:05:55.792 SYMLINK libspdk_accel.so 00:05:56.051 LIB libspdk_nvme.a 00:05:56.051 LIB libspdk_fsdev.a 00:05:56.051 CC lib/bdev/bdev.o 00:05:56.051 LIB libspdk_event.a 00:05:56.051 CC lib/bdev/bdev_rpc.o 00:05:56.051 CC lib/bdev/bdev_zone.o 00:05:56.051 CC lib/bdev/part.o 00:05:56.051 CC lib/bdev/scsi_nvme.o 00:05:56.051 SO libspdk_fsdev.so.2.0 00:05:56.051 SO libspdk_nvme.so.15.0 00:05:56.051 SO libspdk_event.so.14.0 00:05:56.051 SYMLINK libspdk_fsdev.so 00:05:56.309 SYMLINK libspdk_event.so 00:05:56.309 SYMLINK libspdk_nvme.so 00:05:56.309 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:57.244 LIB libspdk_fuse_dispatcher.a 00:05:57.244 SO libspdk_fuse_dispatcher.so.1.0 00:05:57.244 SYMLINK libspdk_fuse_dispatcher.so 00:05:58.677 LIB libspdk_blob.a 00:05:58.677 SO libspdk_blob.so.11.0 00:05:58.677 SYMLINK libspdk_blob.so 00:05:58.935 LIB libspdk_bdev.a 00:05:58.935 SO libspdk_bdev.so.17.0 00:05:58.935 CC 
lib/blobfs/blobfs.o 00:05:58.935 CC lib/blobfs/tree.o 00:05:58.935 CC lib/lvol/lvol.o 00:05:59.192 SYMLINK libspdk_bdev.so 00:05:59.450 CC lib/ublk/ublk.o 00:05:59.450 CC lib/ublk/ublk_rpc.o 00:05:59.450 CC lib/scsi/dev.o 00:05:59.450 CC lib/scsi/port.o 00:05:59.450 CC lib/scsi/lun.o 00:05:59.450 CC lib/nbd/nbd.o 00:05:59.450 CC lib/ftl/ftl_core.o 00:05:59.450 CC lib/nvmf/ctrlr.o 00:05:59.450 CC lib/nvmf/ctrlr_discovery.o 00:05:59.450 CC lib/nbd/nbd_rpc.o 00:05:59.730 CC lib/nvmf/ctrlr_bdev.o 00:05:59.730 CC lib/scsi/scsi.o 00:05:59.730 CC lib/ftl/ftl_init.o 00:05:59.730 CC lib/nvmf/subsystem.o 00:05:59.730 LIB libspdk_nbd.a 00:05:59.989 SO libspdk_nbd.so.7.0 00:05:59.989 CC lib/scsi/scsi_bdev.o 00:05:59.989 SYMLINK libspdk_nbd.so 00:05:59.989 CC lib/ftl/ftl_layout.o 00:05:59.989 CC lib/ftl/ftl_debug.o 00:05:59.989 LIB libspdk_blobfs.a 00:05:59.989 SO libspdk_blobfs.so.10.0 00:05:59.989 SYMLINK libspdk_blobfs.so 00:05:59.989 CC lib/ftl/ftl_io.o 00:05:59.989 LIB libspdk_ublk.a 00:06:00.248 CC lib/ftl/ftl_sb.o 00:06:00.248 SO libspdk_ublk.so.3.0 00:06:00.248 CC lib/ftl/ftl_l2p.o 00:06:00.248 LIB libspdk_lvol.a 00:06:00.248 SYMLINK libspdk_ublk.so 00:06:00.248 CC lib/ftl/ftl_l2p_flat.o 00:06:00.248 SO libspdk_lvol.so.10.0 00:06:00.248 SYMLINK libspdk_lvol.so 00:06:00.248 CC lib/ftl/ftl_nv_cache.o 00:06:00.248 CC lib/nvmf/nvmf.o 00:06:00.248 CC lib/nvmf/nvmf_rpc.o 00:06:00.248 CC lib/ftl/ftl_band.o 00:06:00.507 CC lib/nvmf/transport.o 00:06:00.507 CC lib/nvmf/tcp.o 00:06:00.507 CC lib/nvmf/stubs.o 00:06:00.507 CC lib/scsi/scsi_pr.o 00:06:00.765 CC lib/scsi/scsi_rpc.o 00:06:00.765 CC lib/nvmf/mdns_server.o 00:06:01.023 CC lib/scsi/task.o 00:06:01.023 CC lib/nvmf/rdma.o 00:06:01.281 LIB libspdk_scsi.a 00:06:01.281 CC lib/nvmf/auth.o 00:06:01.281 SO libspdk_scsi.so.9.0 00:06:01.281 CC lib/ftl/ftl_band_ops.o 00:06:01.281 SYMLINK libspdk_scsi.so 00:06:01.281 CC lib/ftl/ftl_writer.o 00:06:01.281 CC lib/ftl/ftl_rq.o 00:06:01.281 CC lib/ftl/ftl_reloc.o 00:06:01.538 CC 
lib/ftl/ftl_l2p_cache.o 00:06:01.538 CC lib/iscsi/conn.o 00:06:01.538 CC lib/ftl/ftl_p2l.o 00:06:01.538 CC lib/ftl/ftl_p2l_log.o 00:06:01.797 CC lib/iscsi/init_grp.o 00:06:01.797 CC lib/vhost/vhost.o 00:06:01.797 CC lib/ftl/mngt/ftl_mngt.o 00:06:02.055 CC lib/iscsi/iscsi.o 00:06:02.055 CC lib/vhost/vhost_rpc.o 00:06:02.055 CC lib/vhost/vhost_scsi.o 00:06:02.055 CC lib/vhost/vhost_blk.o 00:06:02.316 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:02.316 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:02.316 CC lib/iscsi/param.o 00:06:02.316 CC lib/iscsi/portal_grp.o 00:06:02.316 CC lib/iscsi/tgt_node.o 00:06:02.316 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:02.596 CC lib/iscsi/iscsi_subsystem.o 00:06:02.596 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:02.596 CC lib/iscsi/iscsi_rpc.o 00:06:02.596 CC lib/vhost/rte_vhost_user.o 00:06:02.855 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:02.855 CC lib/iscsi/task.o 00:06:03.113 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:03.113 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:03.113 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:03.113 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:03.113 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:03.113 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:03.113 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:03.113 CC lib/ftl/utils/ftl_conf.o 00:06:03.371 CC lib/ftl/utils/ftl_md.o 00:06:03.371 CC lib/ftl/utils/ftl_mempool.o 00:06:03.371 CC lib/ftl/utils/ftl_bitmap.o 00:06:03.371 CC lib/ftl/utils/ftl_property.o 00:06:03.371 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:03.371 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:03.630 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:03.630 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:03.630 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:03.630 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:03.630 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:03.889 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:03.889 LIB libspdk_nvmf.a 00:06:03.889 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:03.889 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:03.889 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 
00:06:03.889 LIB libspdk_vhost.a 00:06:03.889 LIB libspdk_iscsi.a 00:06:03.889 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:03.889 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:03.890 SO libspdk_vhost.so.8.0 00:06:03.890 SO libspdk_nvmf.so.20.0 00:06:03.890 CC lib/ftl/base/ftl_base_dev.o 00:06:03.890 SO libspdk_iscsi.so.8.0 00:06:03.890 CC lib/ftl/base/ftl_base_bdev.o 00:06:03.890 CC lib/ftl/ftl_trace.o 00:06:04.148 SYMLINK libspdk_vhost.so 00:06:04.148 SYMLINK libspdk_iscsi.so 00:06:04.148 SYMLINK libspdk_nvmf.so 00:06:04.406 LIB libspdk_ftl.a 00:06:04.406 SO libspdk_ftl.so.9.0 00:06:04.666 SYMLINK libspdk_ftl.so 00:06:05.235 CC module/env_dpdk/env_dpdk_rpc.o 00:06:05.235 CC module/fsdev/aio/fsdev_aio.o 00:06:05.235 CC module/keyring/file/keyring.o 00:06:05.235 CC module/accel/ioat/accel_ioat.o 00:06:05.235 CC module/blob/bdev/blob_bdev.o 00:06:05.235 CC module/accel/error/accel_error.o 00:06:05.235 CC module/accel/dsa/accel_dsa.o 00:06:05.235 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:05.235 CC module/sock/posix/posix.o 00:06:05.235 CC module/keyring/linux/keyring.o 00:06:05.235 LIB libspdk_env_dpdk_rpc.a 00:06:05.235 SO libspdk_env_dpdk_rpc.so.6.0 00:06:05.493 SYMLINK libspdk_env_dpdk_rpc.so 00:06:05.493 CC module/keyring/linux/keyring_rpc.o 00:06:05.493 CC module/keyring/file/keyring_rpc.o 00:06:05.494 CC module/accel/ioat/accel_ioat_rpc.o 00:06:05.494 CC module/accel/error/accel_error_rpc.o 00:06:05.494 LIB libspdk_scheduler_dynamic.a 00:06:05.494 SO libspdk_scheduler_dynamic.so.4.0 00:06:05.494 LIB libspdk_keyring_linux.a 00:06:05.494 LIB libspdk_keyring_file.a 00:06:05.494 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:05.494 SO libspdk_keyring_linux.so.1.0 00:06:05.494 SYMLINK libspdk_scheduler_dynamic.so 00:06:05.494 SO libspdk_keyring_file.so.2.0 00:06:05.494 CC module/accel/dsa/accel_dsa_rpc.o 00:06:05.494 LIB libspdk_blob_bdev.a 00:06:05.494 LIB libspdk_accel_ioat.a 00:06:05.494 SO libspdk_blob_bdev.so.11.0 00:06:05.752 SO 
libspdk_accel_ioat.so.6.0 00:06:05.752 SYMLINK libspdk_keyring_linux.so 00:06:05.752 LIB libspdk_accel_error.a 00:06:05.752 SYMLINK libspdk_keyring_file.so 00:06:05.752 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:05.752 SO libspdk_accel_error.so.2.0 00:06:05.752 SYMLINK libspdk_blob_bdev.so 00:06:05.752 SYMLINK libspdk_accel_ioat.so 00:06:05.752 CC module/fsdev/aio/linux_aio_mgr.o 00:06:05.752 LIB libspdk_accel_dsa.a 00:06:05.752 CC module/scheduler/gscheduler/gscheduler.o 00:06:05.752 LIB libspdk_scheduler_dpdk_governor.a 00:06:05.752 SYMLINK libspdk_accel_error.so 00:06:05.752 SO libspdk_accel_dsa.so.5.0 00:06:05.752 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:05.752 CC module/accel/iaa/accel_iaa.o 00:06:05.752 SYMLINK libspdk_accel_dsa.so 00:06:05.752 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:05.752 CC module/accel/iaa/accel_iaa_rpc.o 00:06:06.012 LIB libspdk_scheduler_gscheduler.a 00:06:06.012 SO libspdk_scheduler_gscheduler.so.4.0 00:06:06.012 CC module/bdev/delay/vbdev_delay.o 00:06:06.012 CC module/bdev/error/vbdev_error.o 00:06:06.012 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:06.012 SYMLINK libspdk_scheduler_gscheduler.so 00:06:06.012 CC module/bdev/gpt/gpt.o 00:06:06.012 CC module/bdev/error/vbdev_error_rpc.o 00:06:06.012 CC module/blobfs/bdev/blobfs_bdev.o 00:06:06.012 LIB libspdk_accel_iaa.a 00:06:06.012 LIB libspdk_fsdev_aio.a 00:06:06.012 SO libspdk_accel_iaa.so.3.0 00:06:06.012 SO libspdk_fsdev_aio.so.1.0 00:06:06.012 CC module/bdev/lvol/vbdev_lvol.o 00:06:06.270 SYMLINK libspdk_accel_iaa.so 00:06:06.270 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:06.270 SYMLINK libspdk_fsdev_aio.so 00:06:06.270 CC module/bdev/gpt/vbdev_gpt.o 00:06:06.270 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:06.270 LIB libspdk_sock_posix.a 00:06:06.270 LIB libspdk_bdev_error.a 00:06:06.270 SO libspdk_bdev_error.so.6.0 00:06:06.270 SO libspdk_sock_posix.so.6.0 00:06:06.270 CC module/bdev/malloc/bdev_malloc.o 00:06:06.270 CC module/bdev/null/bdev_null.o 00:06:06.270 
CC module/bdev/nvme/bdev_nvme.o 00:06:06.529 LIB libspdk_bdev_delay.a 00:06:06.529 LIB libspdk_blobfs_bdev.a 00:06:06.529 SYMLINK libspdk_bdev_error.so 00:06:06.529 SYMLINK libspdk_sock_posix.so 00:06:06.529 SO libspdk_bdev_delay.so.6.0 00:06:06.529 SO libspdk_blobfs_bdev.so.6.0 00:06:06.529 LIB libspdk_bdev_gpt.a 00:06:06.529 SYMLINK libspdk_bdev_delay.so 00:06:06.529 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:06.529 SYMLINK libspdk_blobfs_bdev.so 00:06:06.529 SO libspdk_bdev_gpt.so.6.0 00:06:06.529 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:06.529 CC module/bdev/raid/bdev_raid.o 00:06:06.529 CC module/bdev/passthru/vbdev_passthru.o 00:06:06.529 SYMLINK libspdk_bdev_gpt.so 00:06:06.529 CC module/bdev/raid/bdev_raid_rpc.o 00:06:06.788 CC module/bdev/null/bdev_null_rpc.o 00:06:06.788 LIB libspdk_bdev_lvol.a 00:06:06.788 CC module/bdev/split/vbdev_split.o 00:06:06.788 CC module/bdev/split/vbdev_split_rpc.o 00:06:06.788 SO libspdk_bdev_lvol.so.6.0 00:06:06.788 LIB libspdk_bdev_malloc.a 00:06:06.788 SYMLINK libspdk_bdev_lvol.so 00:06:06.788 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:06.788 SO libspdk_bdev_malloc.so.6.0 00:06:06.788 CC module/bdev/raid/bdev_raid_sb.o 00:06:06.788 LIB libspdk_bdev_null.a 00:06:06.788 SO libspdk_bdev_null.so.6.0 00:06:06.788 SYMLINK libspdk_bdev_malloc.so 00:06:06.788 CC module/bdev/nvme/nvme_rpc.o 00:06:06.788 CC module/bdev/nvme/bdev_mdns_client.o 00:06:07.047 CC module/bdev/nvme/vbdev_opal.o 00:06:07.047 SYMLINK libspdk_bdev_null.so 00:06:07.047 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:07.047 LIB libspdk_bdev_passthru.a 00:06:07.047 LIB libspdk_bdev_split.a 00:06:07.047 SO libspdk_bdev_split.so.6.0 00:06:07.047 SO libspdk_bdev_passthru.so.6.0 00:06:07.047 SYMLINK libspdk_bdev_split.so 00:06:07.047 CC module/bdev/raid/raid0.o 00:06:07.047 CC module/bdev/raid/raid1.o 00:06:07.047 SYMLINK libspdk_bdev_passthru.so 00:06:07.047 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:07.047 CC module/bdev/raid/concat.o 00:06:07.305 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:06:07.305 CC module/bdev/aio/bdev_aio.o 00:06:07.305 CC module/bdev/raid/raid5f.o 00:06:07.305 CC module/bdev/aio/bdev_aio_rpc.o 00:06:07.305 CC module/bdev/iscsi/bdev_iscsi.o 00:06:07.305 CC module/bdev/ftl/bdev_ftl.o 00:06:07.305 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:07.607 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:07.607 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:07.607 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:07.607 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:07.607 LIB libspdk_bdev_zone_block.a 00:06:07.607 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:07.607 SO libspdk_bdev_zone_block.so.6.0 00:06:07.607 LIB libspdk_bdev_aio.a 00:06:07.607 SO libspdk_bdev_aio.so.6.0 00:06:07.867 SYMLINK libspdk_bdev_zone_block.so 00:06:07.867 SYMLINK libspdk_bdev_aio.so 00:06:07.867 LIB libspdk_bdev_iscsi.a 00:06:07.867 SO libspdk_bdev_iscsi.so.6.0 00:06:07.867 LIB libspdk_bdev_raid.a 00:06:07.867 LIB libspdk_bdev_ftl.a 00:06:07.867 SYMLINK libspdk_bdev_iscsi.so 00:06:07.867 SO libspdk_bdev_raid.so.6.0 00:06:07.867 SO libspdk_bdev_ftl.so.6.0 00:06:08.125 SYMLINK libspdk_bdev_ftl.so 00:06:08.125 SYMLINK libspdk_bdev_raid.so 00:06:08.125 LIB libspdk_bdev_virtio.a 00:06:08.125 SO libspdk_bdev_virtio.so.6.0 00:06:08.125 SYMLINK libspdk_bdev_virtio.so 00:06:10.023 LIB libspdk_bdev_nvme.a 00:06:10.023 SO libspdk_bdev_nvme.so.7.1 00:06:10.023 SYMLINK libspdk_bdev_nvme.so 00:06:10.589 CC module/event/subsystems/sock/sock.o 00:06:10.589 CC module/event/subsystems/iobuf/iobuf.o 00:06:10.589 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:10.589 CC module/event/subsystems/vmd/vmd.o 00:06:10.589 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:10.589 CC module/event/subsystems/fsdev/fsdev.o 00:06:10.589 CC module/event/subsystems/scheduler/scheduler.o 00:06:10.589 CC module/event/subsystems/keyring/keyring.o 00:06:10.848 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:10.848 LIB libspdk_event_keyring.a 
00:06:10.848 LIB libspdk_event_fsdev.a 00:06:10.848 LIB libspdk_event_sock.a 00:06:10.848 LIB libspdk_event_vmd.a 00:06:10.848 LIB libspdk_event_scheduler.a 00:06:10.848 LIB libspdk_event_iobuf.a 00:06:10.848 SO libspdk_event_fsdev.so.1.0 00:06:10.848 SO libspdk_event_sock.so.5.0 00:06:10.848 SO libspdk_event_keyring.so.1.0 00:06:10.848 SO libspdk_event_scheduler.so.4.0 00:06:10.848 SO libspdk_event_vmd.so.6.0 00:06:10.848 LIB libspdk_event_vhost_blk.a 00:06:10.848 SO libspdk_event_iobuf.so.3.0 00:06:10.848 SO libspdk_event_vhost_blk.so.3.0 00:06:10.848 SYMLINK libspdk_event_fsdev.so 00:06:10.848 SYMLINK libspdk_event_sock.so 00:06:10.848 SYMLINK libspdk_event_keyring.so 00:06:10.848 SYMLINK libspdk_event_scheduler.so 00:06:10.848 SYMLINK libspdk_event_vmd.so 00:06:10.848 SYMLINK libspdk_event_iobuf.so 00:06:10.848 SYMLINK libspdk_event_vhost_blk.so 00:06:11.415 CC module/event/subsystems/accel/accel.o 00:06:11.415 LIB libspdk_event_accel.a 00:06:11.415 SO libspdk_event_accel.so.6.0 00:06:11.698 SYMLINK libspdk_event_accel.so 00:06:11.957 CC module/event/subsystems/bdev/bdev.o 00:06:12.215 LIB libspdk_event_bdev.a 00:06:12.215 SO libspdk_event_bdev.so.6.0 00:06:12.215 SYMLINK libspdk_event_bdev.so 00:06:12.472 CC module/event/subsystems/nbd/nbd.o 00:06:12.472 CC module/event/subsystems/ublk/ublk.o 00:06:12.472 CC module/event/subsystems/scsi/scsi.o 00:06:12.472 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:12.731 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:12.731 LIB libspdk_event_nbd.a 00:06:12.731 LIB libspdk_event_ublk.a 00:06:12.731 SO libspdk_event_nbd.so.6.0 00:06:12.731 SO libspdk_event_ublk.so.3.0 00:06:12.731 LIB libspdk_event_scsi.a 00:06:12.731 SYMLINK libspdk_event_nbd.so 00:06:12.731 SYMLINK libspdk_event_ublk.so 00:06:12.989 SO libspdk_event_scsi.so.6.0 00:06:12.989 LIB libspdk_event_nvmf.a 00:06:12.989 SYMLINK libspdk_event_scsi.so 00:06:12.989 SO libspdk_event_nvmf.so.6.0 00:06:12.989 SYMLINK libspdk_event_nvmf.so 00:06:13.247 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:13.247 CC module/event/subsystems/iscsi/iscsi.o 00:06:13.505 LIB libspdk_event_vhost_scsi.a 00:06:13.505 SO libspdk_event_vhost_scsi.so.3.0 00:06:13.505 LIB libspdk_event_iscsi.a 00:06:13.505 SO libspdk_event_iscsi.so.6.0 00:06:13.505 SYMLINK libspdk_event_vhost_scsi.so 00:06:13.505 SYMLINK libspdk_event_iscsi.so 00:06:13.764 SO libspdk.so.6.0 00:06:13.764 SYMLINK libspdk.so 00:06:14.022 CC test/rpc_client/rpc_client_test.o 00:06:14.022 TEST_HEADER include/spdk/accel.h 00:06:14.022 TEST_HEADER include/spdk/accel_module.h 00:06:14.022 TEST_HEADER include/spdk/assert.h 00:06:14.022 TEST_HEADER include/spdk/barrier.h 00:06:14.022 TEST_HEADER include/spdk/base64.h 00:06:14.022 CXX app/trace/trace.o 00:06:14.022 TEST_HEADER include/spdk/bdev.h 00:06:14.022 TEST_HEADER include/spdk/bdev_module.h 00:06:14.022 TEST_HEADER include/spdk/bdev_zone.h 00:06:14.022 TEST_HEADER include/spdk/bit_array.h 00:06:14.022 TEST_HEADER include/spdk/bit_pool.h 00:06:14.022 TEST_HEADER include/spdk/blob_bdev.h 00:06:14.022 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:14.022 TEST_HEADER include/spdk/blobfs.h 00:06:14.022 TEST_HEADER include/spdk/blob.h 00:06:14.022 TEST_HEADER include/spdk/conf.h 00:06:14.022 TEST_HEADER include/spdk/config.h 00:06:14.022 TEST_HEADER include/spdk/cpuset.h 00:06:14.022 TEST_HEADER include/spdk/crc16.h 00:06:14.022 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:14.022 TEST_HEADER include/spdk/crc32.h 00:06:14.022 TEST_HEADER include/spdk/crc64.h 00:06:14.022 TEST_HEADER include/spdk/dif.h 00:06:14.022 TEST_HEADER include/spdk/dma.h 00:06:14.022 TEST_HEADER include/spdk/endian.h 00:06:14.022 TEST_HEADER include/spdk/env_dpdk.h 00:06:14.022 TEST_HEADER include/spdk/env.h 00:06:14.022 TEST_HEADER include/spdk/event.h 00:06:14.022 TEST_HEADER include/spdk/fd_group.h 00:06:14.022 TEST_HEADER include/spdk/fd.h 00:06:14.022 TEST_HEADER include/spdk/file.h 00:06:14.022 TEST_HEADER include/spdk/fsdev.h 00:06:14.022 
TEST_HEADER include/spdk/fsdev_module.h 00:06:14.279 TEST_HEADER include/spdk/ftl.h 00:06:14.279 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:14.279 TEST_HEADER include/spdk/gpt_spec.h 00:06:14.279 TEST_HEADER include/spdk/hexlify.h 00:06:14.279 TEST_HEADER include/spdk/histogram_data.h 00:06:14.279 TEST_HEADER include/spdk/idxd.h 00:06:14.279 CC examples/ioat/perf/perf.o 00:06:14.279 TEST_HEADER include/spdk/idxd_spec.h 00:06:14.279 TEST_HEADER include/spdk/init.h 00:06:14.279 TEST_HEADER include/spdk/ioat.h 00:06:14.279 TEST_HEADER include/spdk/ioat_spec.h 00:06:14.279 TEST_HEADER include/spdk/iscsi_spec.h 00:06:14.279 TEST_HEADER include/spdk/json.h 00:06:14.279 TEST_HEADER include/spdk/jsonrpc.h 00:06:14.279 TEST_HEADER include/spdk/keyring.h 00:06:14.279 TEST_HEADER include/spdk/keyring_module.h 00:06:14.279 CC examples/util/zipf/zipf.o 00:06:14.279 TEST_HEADER include/spdk/likely.h 00:06:14.279 CC test/thread/poller_perf/poller_perf.o 00:06:14.279 TEST_HEADER include/spdk/log.h 00:06:14.279 TEST_HEADER include/spdk/lvol.h 00:06:14.279 TEST_HEADER include/spdk/md5.h 00:06:14.279 TEST_HEADER include/spdk/memory.h 00:06:14.279 TEST_HEADER include/spdk/mmio.h 00:06:14.280 TEST_HEADER include/spdk/nbd.h 00:06:14.280 TEST_HEADER include/spdk/net.h 00:06:14.280 TEST_HEADER include/spdk/notify.h 00:06:14.280 TEST_HEADER include/spdk/nvme.h 00:06:14.280 TEST_HEADER include/spdk/nvme_intel.h 00:06:14.280 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:14.280 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:14.280 TEST_HEADER include/spdk/nvme_spec.h 00:06:14.280 TEST_HEADER include/spdk/nvme_zns.h 00:06:14.280 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:14.280 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:14.280 TEST_HEADER include/spdk/nvmf.h 00:06:14.280 CC test/app/bdev_svc/bdev_svc.o 00:06:14.280 CC test/dma/test_dma/test_dma.o 00:06:14.280 TEST_HEADER include/spdk/nvmf_spec.h 00:06:14.280 TEST_HEADER include/spdk/nvmf_transport.h 00:06:14.280 TEST_HEADER 
include/spdk/opal.h 00:06:14.280 TEST_HEADER include/spdk/opal_spec.h 00:06:14.280 TEST_HEADER include/spdk/pci_ids.h 00:06:14.280 TEST_HEADER include/spdk/pipe.h 00:06:14.280 TEST_HEADER include/spdk/queue.h 00:06:14.280 TEST_HEADER include/spdk/reduce.h 00:06:14.280 TEST_HEADER include/spdk/rpc.h 00:06:14.280 CC test/env/mem_callbacks/mem_callbacks.o 00:06:14.280 TEST_HEADER include/spdk/scheduler.h 00:06:14.280 TEST_HEADER include/spdk/scsi.h 00:06:14.280 TEST_HEADER include/spdk/scsi_spec.h 00:06:14.280 TEST_HEADER include/spdk/sock.h 00:06:14.280 TEST_HEADER include/spdk/stdinc.h 00:06:14.280 TEST_HEADER include/spdk/string.h 00:06:14.280 TEST_HEADER include/spdk/thread.h 00:06:14.280 TEST_HEADER include/spdk/trace.h 00:06:14.280 TEST_HEADER include/spdk/trace_parser.h 00:06:14.280 TEST_HEADER include/spdk/tree.h 00:06:14.280 TEST_HEADER include/spdk/ublk.h 00:06:14.280 TEST_HEADER include/spdk/util.h 00:06:14.280 LINK rpc_client_test 00:06:14.280 TEST_HEADER include/spdk/uuid.h 00:06:14.280 TEST_HEADER include/spdk/version.h 00:06:14.280 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:14.280 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:14.280 TEST_HEADER include/spdk/vhost.h 00:06:14.280 TEST_HEADER include/spdk/vmd.h 00:06:14.280 TEST_HEADER include/spdk/xor.h 00:06:14.280 TEST_HEADER include/spdk/zipf.h 00:06:14.280 CXX test/cpp_headers/accel.o 00:06:14.280 LINK interrupt_tgt 00:06:14.280 LINK zipf 00:06:14.280 LINK poller_perf 00:06:14.537 LINK ioat_perf 00:06:14.537 LINK bdev_svc 00:06:14.537 CXX test/cpp_headers/accel_module.o 00:06:14.537 LINK spdk_trace 00:06:14.537 CC test/env/vtophys/vtophys.o 00:06:14.537 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:14.537 CC test/env/memory/memory_ut.o 00:06:14.537 CXX test/cpp_headers/assert.o 00:06:14.795 CC examples/ioat/verify/verify.o 00:06:14.795 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:14.795 CC app/trace_record/trace_record.o 00:06:14.795 LINK vtophys 00:06:14.795 LINK test_dma 
00:06:14.795 LINK env_dpdk_post_init 00:06:14.795 CC app/nvmf_tgt/nvmf_main.o 00:06:14.795 CXX test/cpp_headers/barrier.o 00:06:14.796 LINK mem_callbacks 00:06:14.796 LINK verify 00:06:15.054 LINK spdk_trace_record 00:06:15.054 CXX test/cpp_headers/base64.o 00:06:15.054 LINK nvmf_tgt 00:06:15.054 CC test/env/pci/pci_ut.o 00:06:15.054 CXX test/cpp_headers/bdev.o 00:06:15.054 CC examples/sock/hello_world/hello_sock.o 00:06:15.312 CC examples/vmd/lsvmd/lsvmd.o 00:06:15.312 CC examples/thread/thread/thread_ex.o 00:06:15.312 LINK nvme_fuzz 00:06:15.312 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:15.312 CC examples/idxd/perf/perf.o 00:06:15.312 LINK lsvmd 00:06:15.312 CXX test/cpp_headers/bdev_module.o 00:06:15.312 CC app/iscsi_tgt/iscsi_tgt.o 00:06:15.312 LINK thread 00:06:15.603 LINK hello_sock 00:06:15.603 LINK pci_ut 00:06:15.603 CC app/spdk_tgt/spdk_tgt.o 00:06:15.603 CXX test/cpp_headers/bdev_zone.o 00:06:15.603 CC examples/vmd/led/led.o 00:06:15.603 LINK iscsi_tgt 00:06:15.603 LINK idxd_perf 00:06:15.860 CXX test/cpp_headers/bit_array.o 00:06:15.861 CC app/spdk_lspci/spdk_lspci.o 00:06:15.861 CC app/spdk_nvme_perf/perf.o 00:06:15.861 LINK led 00:06:15.861 LINK memory_ut 00:06:15.861 LINK spdk_tgt 00:06:15.861 CXX test/cpp_headers/bit_pool.o 00:06:15.861 LINK spdk_lspci 00:06:15.861 CC app/spdk_nvme_identify/identify.o 00:06:15.861 CC app/spdk_nvme_discover/discovery_aer.o 00:06:16.118 CC test/event/event_perf/event_perf.o 00:06:16.118 CXX test/cpp_headers/blob_bdev.o 00:06:16.118 CXX test/cpp_headers/blobfs_bdev.o 00:06:16.118 CC examples/nvme/hello_world/hello_world.o 00:06:16.118 LINK spdk_nvme_discover 00:06:16.118 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:16.118 CC examples/accel/perf/accel_perf.o 00:06:16.118 LINK event_perf 00:06:16.376 CXX test/cpp_headers/blobfs.o 00:06:16.376 LINK hello_world 00:06:16.376 CC test/app/histogram_perf/histogram_perf.o 00:06:16.376 CC test/nvme/aer/aer.o 00:06:16.634 LINK hello_fsdev 00:06:16.634 CXX 
test/cpp_headers/blob.o 00:06:16.634 CC test/event/reactor/reactor.o 00:06:16.634 LINK histogram_perf 00:06:16.634 CXX test/cpp_headers/conf.o 00:06:16.634 CC examples/nvme/reconnect/reconnect.o 00:06:16.634 LINK reactor 00:06:16.905 LINK spdk_nvme_perf 00:06:16.905 LINK accel_perf 00:06:16.905 LINK aer 00:06:16.905 CXX test/cpp_headers/config.o 00:06:16.905 CXX test/cpp_headers/cpuset.o 00:06:16.905 CC test/accel/dif/dif.o 00:06:16.905 CC test/event/reactor_perf/reactor_perf.o 00:06:16.905 CC test/blobfs/mkfs/mkfs.o 00:06:17.162 CXX test/cpp_headers/crc16.o 00:06:17.162 CC test/nvme/reset/reset.o 00:06:17.162 LINK spdk_nvme_identify 00:06:17.162 LINK reconnect 00:06:17.162 LINK reactor_perf 00:06:17.162 LINK mkfs 00:06:17.162 CXX test/cpp_headers/crc32.o 00:06:17.419 CC examples/blob/hello_world/hello_blob.o 00:06:17.419 CC test/lvol/esnap/esnap.o 00:06:17.419 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:17.419 LINK reset 00:06:17.419 CC app/spdk_top/spdk_top.o 00:06:17.419 CC test/event/app_repeat/app_repeat.o 00:06:17.419 CXX test/cpp_headers/crc64.o 00:06:17.419 LINK iscsi_fuzz 00:06:17.676 LINK hello_blob 00:06:17.676 LINK app_repeat 00:06:17.676 CC examples/blob/cli/blobcli.o 00:06:17.676 CXX test/cpp_headers/dif.o 00:06:17.676 CC test/nvme/sgl/sgl.o 00:06:17.676 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:17.676 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:17.935 CXX test/cpp_headers/dma.o 00:06:17.935 LINK dif 00:06:17.935 LINK nvme_manage 00:06:17.935 CC test/event/scheduler/scheduler.o 00:06:17.935 LINK sgl 00:06:17.935 CXX test/cpp_headers/endian.o 00:06:17.935 CC examples/nvme/arbitration/arbitration.o 00:06:18.193 CC examples/nvme/hotplug/hotplug.o 00:06:18.193 LINK scheduler 00:06:18.193 CXX test/cpp_headers/env_dpdk.o 00:06:18.193 CC test/app/jsoncat/jsoncat.o 00:06:18.193 LINK blobcli 00:06:18.193 CC test/nvme/e2edp/nvme_dp.o 00:06:18.193 LINK vhost_fuzz 00:06:18.452 LINK jsoncat 00:06:18.452 CXX test/cpp_headers/env.o 00:06:18.452 LINK 
arbitration 00:06:18.452 LINK hotplug 00:06:18.452 CXX test/cpp_headers/event.o 00:06:18.452 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:18.452 LINK spdk_top 00:06:18.711 CC test/app/stub/stub.o 00:06:18.711 LINK nvme_dp 00:06:18.711 CC examples/bdev/hello_world/hello_bdev.o 00:06:18.711 CC test/bdev/bdevio/bdevio.o 00:06:18.711 CXX test/cpp_headers/fd_group.o 00:06:18.711 CC examples/nvme/abort/abort.o 00:06:18.711 LINK cmb_copy 00:06:18.711 CC test/nvme/overhead/overhead.o 00:06:18.711 LINK stub 00:06:18.971 CXX test/cpp_headers/fd.o 00:06:18.971 LINK hello_bdev 00:06:18.971 CC app/vhost/vhost.o 00:06:18.971 CC test/nvme/err_injection/err_injection.o 00:06:18.971 CC test/nvme/startup/startup.o 00:06:18.971 CXX test/cpp_headers/file.o 00:06:18.971 CC test/nvme/reserve/reserve.o 00:06:19.229 LINK vhost 00:06:19.229 LINK abort 00:06:19.229 LINK overhead 00:06:19.229 LINK bdevio 00:06:19.229 LINK err_injection 00:06:19.229 LINK startup 00:06:19.229 CXX test/cpp_headers/fsdev.o 00:06:19.229 CC examples/bdev/bdevperf/bdevperf.o 00:06:19.229 LINK reserve 00:06:19.229 CXX test/cpp_headers/fsdev_module.o 00:06:19.487 CXX test/cpp_headers/ftl.o 00:06:19.487 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:19.487 CC test/nvme/simple_copy/simple_copy.o 00:06:19.487 CC test/nvme/boot_partition/boot_partition.o 00:06:19.487 CC app/spdk_dd/spdk_dd.o 00:06:19.487 CC test/nvme/connect_stress/connect_stress.o 00:06:19.487 CXX test/cpp_headers/fuse_dispatcher.o 00:06:19.487 CC test/nvme/compliance/nvme_compliance.o 00:06:19.487 LINK pmr_persistence 00:06:19.746 LINK boot_partition 00:06:19.746 CXX test/cpp_headers/gpt_spec.o 00:06:19.746 LINK simple_copy 00:06:19.746 LINK connect_stress 00:06:19.746 CC app/fio/nvme/fio_plugin.o 00:06:19.746 CXX test/cpp_headers/hexlify.o 00:06:19.746 LINK spdk_dd 00:06:19.746 CC app/fio/bdev/fio_plugin.o 00:06:20.004 CC test/nvme/fused_ordering/fused_ordering.o 00:06:20.004 LINK nvme_compliance 00:06:20.004 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:06:20.004 CXX test/cpp_headers/histogram_data.o 00:06:20.004 CC test/nvme/fdp/fdp.o 00:06:20.004 CXX test/cpp_headers/idxd.o 00:06:20.263 LINK doorbell_aers 00:06:20.263 LINK fused_ordering 00:06:20.263 CXX test/cpp_headers/idxd_spec.o 00:06:20.263 CC test/nvme/cuse/cuse.o 00:06:20.263 LINK bdevperf 00:06:20.263 CXX test/cpp_headers/init.o 00:06:20.263 LINK spdk_nvme 00:06:20.522 CXX test/cpp_headers/ioat.o 00:06:20.522 CXX test/cpp_headers/ioat_spec.o 00:06:20.522 CXX test/cpp_headers/iscsi_spec.o 00:06:20.522 CXX test/cpp_headers/json.o 00:06:20.522 LINK spdk_bdev 00:06:20.522 CXX test/cpp_headers/jsonrpc.o 00:06:20.522 LINK fdp 00:06:20.522 CXX test/cpp_headers/keyring.o 00:06:20.522 CXX test/cpp_headers/keyring_module.o 00:06:20.522 CXX test/cpp_headers/likely.o 00:06:20.522 CXX test/cpp_headers/log.o 00:06:20.522 CXX test/cpp_headers/lvol.o 00:06:20.781 CXX test/cpp_headers/md5.o 00:06:20.781 CC examples/nvmf/nvmf/nvmf.o 00:06:20.781 CXX test/cpp_headers/memory.o 00:06:20.781 CXX test/cpp_headers/mmio.o 00:06:20.781 CXX test/cpp_headers/nbd.o 00:06:20.781 CXX test/cpp_headers/net.o 00:06:20.781 CXX test/cpp_headers/notify.o 00:06:20.781 CXX test/cpp_headers/nvme.o 00:06:20.781 CXX test/cpp_headers/nvme_intel.o 00:06:21.039 CXX test/cpp_headers/nvme_ocssd.o 00:06:21.039 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:21.039 CXX test/cpp_headers/nvme_spec.o 00:06:21.039 CXX test/cpp_headers/nvme_zns.o 00:06:21.039 CXX test/cpp_headers/nvmf_cmd.o 00:06:21.039 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:21.039 CXX test/cpp_headers/nvmf.o 00:06:21.039 LINK nvmf 00:06:21.039 CXX test/cpp_headers/nvmf_spec.o 00:06:21.039 CXX test/cpp_headers/nvmf_transport.o 00:06:21.297 CXX test/cpp_headers/opal.o 00:06:21.297 CXX test/cpp_headers/opal_spec.o 00:06:21.297 CXX test/cpp_headers/pci_ids.o 00:06:21.297 CXX test/cpp_headers/pipe.o 00:06:21.297 CXX test/cpp_headers/queue.o 00:06:21.297 CXX test/cpp_headers/reduce.o 00:06:21.297 CXX 
test/cpp_headers/rpc.o 00:06:21.297 CXX test/cpp_headers/scheduler.o 00:06:21.297 CXX test/cpp_headers/scsi.o 00:06:21.297 CXX test/cpp_headers/scsi_spec.o 00:06:21.297 CXX test/cpp_headers/sock.o 00:06:21.297 CXX test/cpp_headers/stdinc.o 00:06:21.297 CXX test/cpp_headers/string.o 00:06:21.556 CXX test/cpp_headers/thread.o 00:06:21.556 CXX test/cpp_headers/trace.o 00:06:21.556 CXX test/cpp_headers/trace_parser.o 00:06:21.556 CXX test/cpp_headers/tree.o 00:06:21.556 CXX test/cpp_headers/ublk.o 00:06:21.556 CXX test/cpp_headers/util.o 00:06:21.556 CXX test/cpp_headers/uuid.o 00:06:21.556 CXX test/cpp_headers/version.o 00:06:21.556 CXX test/cpp_headers/vfio_user_pci.o 00:06:21.556 CXX test/cpp_headers/vfio_user_spec.o 00:06:21.556 CXX test/cpp_headers/vhost.o 00:06:21.556 CXX test/cpp_headers/vmd.o 00:06:21.815 CXX test/cpp_headers/xor.o 00:06:21.815 CXX test/cpp_headers/zipf.o 00:06:22.073 LINK cuse 00:06:23.977 LINK esnap 00:06:24.544 00:06:24.544 real 1m31.959s 00:06:24.544 user 8m7.449s 00:06:24.544 sys 1m39.413s 00:06:24.544 07:03:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:24.544 07:03:06 make -- common/autotest_common.sh@10 -- $ set +x 00:06:24.544 ************************************ 00:06:24.544 END TEST make 00:06:24.544 ************************************ 00:06:24.803 07:03:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:24.803 07:03:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:24.803 07:03:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:24.803 07:03:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.803 07:03:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:24.803 07:03:06 -- pm/common@44 -- $ pid=5472 00:06:24.803 07:03:06 -- pm/common@50 -- $ kill -TERM 5472 00:06:24.803 07:03:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.803 07:03:06 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:24.803 07:03:06 -- pm/common@44 -- $ pid=5473 00:06:24.803 07:03:06 -- pm/common@50 -- $ kill -TERM 5473 00:06:24.803 07:03:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:24.803 07:03:06 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:24.803 07:03:06 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.803 07:03:06 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.803 07:03:06 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.803 07:03:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.803 07:03:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.803 07:03:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.803 07:03:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.803 07:03:07 -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.803 07:03:07 -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.803 07:03:07 -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.803 07:03:07 -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.803 07:03:07 -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.803 07:03:07 -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.803 07:03:07 -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.803 07:03:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.803 07:03:07 -- scripts/common.sh@344 -- # case "$op" in 00:06:24.803 07:03:07 -- scripts/common.sh@345 -- # : 1 00:06:24.803 07:03:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.803 07:03:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.803 07:03:07 -- scripts/common.sh@365 -- # decimal 1 00:06:25.063 07:03:07 -- scripts/common.sh@353 -- # local d=1 00:06:25.063 07:03:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.063 07:03:07 -- scripts/common.sh@355 -- # echo 1 00:06:25.063 07:03:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.063 07:03:07 -- scripts/common.sh@366 -- # decimal 2 00:06:25.063 07:03:07 -- scripts/common.sh@353 -- # local d=2 00:06:25.063 07:03:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.063 07:03:07 -- scripts/common.sh@355 -- # echo 2 00:06:25.063 07:03:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.063 07:03:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.063 07:03:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.063 07:03:07 -- scripts/common.sh@368 -- # return 0 00:06:25.063 07:03:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.063 07:03:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.063 --rc genhtml_branch_coverage=1 00:06:25.063 --rc genhtml_function_coverage=1 00:06:25.063 --rc genhtml_legend=1 00:06:25.063 --rc geninfo_all_blocks=1 00:06:25.063 --rc geninfo_unexecuted_blocks=1 00:06:25.063 00:06:25.063 ' 00:06:25.063 07:03:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.063 --rc genhtml_branch_coverage=1 00:06:25.063 --rc genhtml_function_coverage=1 00:06:25.063 --rc genhtml_legend=1 00:06:25.063 --rc geninfo_all_blocks=1 00:06:25.063 --rc geninfo_unexecuted_blocks=1 00:06:25.063 00:06:25.063 ' 00:06:25.063 07:03:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.063 --rc genhtml_branch_coverage=1 00:06:25.063 --rc 
genhtml_function_coverage=1 00:06:25.063 --rc genhtml_legend=1 00:06:25.063 --rc geninfo_all_blocks=1 00:06:25.063 --rc geninfo_unexecuted_blocks=1 00:06:25.063 00:06:25.063 ' 00:06:25.063 07:03:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.063 --rc genhtml_branch_coverage=1 00:06:25.063 --rc genhtml_function_coverage=1 00:06:25.063 --rc genhtml_legend=1 00:06:25.063 --rc geninfo_all_blocks=1 00:06:25.063 --rc geninfo_unexecuted_blocks=1 00:06:25.063 00:06:25.063 ' 00:06:25.063 07:03:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.063 07:03:07 -- nvmf/common.sh@7 -- # uname -s 00:06:25.063 07:03:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.063 07:03:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.063 07:03:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.063 07:03:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.063 07:03:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.063 07:03:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.063 07:03:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.063 07:03:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.063 07:03:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.063 07:03:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.063 07:03:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d81c61e6-bb83-4cf1-ac1d-576de88b2ab1 00:06:25.063 07:03:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=d81c61e6-bb83-4cf1-ac1d-576de88b2ab1 00:06:25.063 07:03:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.063 07:03:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.063 07:03:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.063 07:03:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:25.063 07:03:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.063 07:03:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.063 07:03:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.063 07:03:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.063 07:03:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.063 07:03:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.063 07:03:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.063 07:03:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.063 07:03:07 -- paths/export.sh@5 -- # export PATH 00:06:25.063 07:03:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.063 07:03:07 -- nvmf/common.sh@51 -- # : 0 00:06:25.063 07:03:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.063 07:03:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.063 07:03:07 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:25.063 07:03:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.063 07:03:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.063 07:03:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.063 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.063 07:03:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.063 07:03:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.063 07:03:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.063 07:03:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:25.063 07:03:07 -- spdk/autotest.sh@32 -- # uname -s 00:06:25.063 07:03:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:25.063 07:03:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:25.063 07:03:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:25.063 07:03:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:25.063 07:03:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:25.063 07:03:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:25.063 07:03:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:25.063 07:03:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:25.063 07:03:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54522 00:06:25.063 07:03:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:25.063 07:03:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:25.063 07:03:07 -- pm/common@17 -- # local monitor 00:06:25.063 07:03:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:25.063 07:03:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:25.063 07:03:07 -- pm/common@21 -- # date +%s 00:06:25.063 07:03:07 -- pm/common@25 -- # sleep 1 00:06:25.063 07:03:07 -- 
pm/common@21 -- # date +%s 00:06:25.063 07:03:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086187 00:06:25.063 07:03:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086187 00:06:25.063 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086187_collect-cpu-load.pm.log 00:06:25.063 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086187_collect-vmstat.pm.log 00:06:26.001 07:03:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:26.001 07:03:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:26.001 07:03:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.001 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.001 07:03:08 -- spdk/autotest.sh@59 -- # create_test_list 00:06:26.001 07:03:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:26.001 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.273 07:03:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:26.273 07:03:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:26.273 07:03:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:26.273 07:03:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:26.273 07:03:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:26.273 07:03:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:26.273 07:03:08 -- common/autotest_common.sh@1457 -- # uname 00:06:26.273 07:03:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:26.273 07:03:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:26.273 07:03:08 -- common/autotest_common.sh@1477 -- 
# uname 00:06:26.273 07:03:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:26.273 07:03:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:26.273 07:03:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:26.273 lcov: LCOV version 1.15 00:06:26.273 07:03:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:44.372 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:44.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:59.280 07:03:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:59.280 07:03:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.280 07:03:41 -- common/autotest_common.sh@10 -- # set +x 00:06:59.280 07:03:41 -- spdk/autotest.sh@78 -- # rm -f 00:06:59.280 07:03:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:59.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.105 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:00.105 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:00.105 07:03:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:00.105 07:03:42 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:00.105 07:03:42 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:00.105 07:03:42 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:00.105 
07:03:42 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.105 07:03:42 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:00.105 07:03:42 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:00.105 07:03:42 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:00.105 07:03:42 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.105 07:03:42 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.105 07:03:42 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:00.105 07:03:42 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:00.105 07:03:42 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:00.105 07:03:42 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.105 07:03:42 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.105 07:03:42 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:07:00.105 07:03:42 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:07:00.105 07:03:42 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:00.105 07:03:42 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.105 07:03:42 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:00.105 07:03:42 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:07:00.105 07:03:42 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:07:00.105 07:03:42 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:00.105 07:03:42 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:00.105 07:03:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:00.105 07:03:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:00.105 07:03:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:00.105 07:03:42 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:07:00.105 07:03:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:00.105 07:03:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:00.105 No valid GPT data, bailing 00:07:00.105 07:03:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:00.105 07:03:42 -- scripts/common.sh@394 -- # pt= 00:07:00.105 07:03:42 -- scripts/common.sh@395 -- # return 1 00:07:00.105 07:03:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:00.105 1+0 records in 00:07:00.105 1+0 records out 00:07:00.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621366 s, 169 MB/s 00:07:00.105 07:03:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:00.105 07:03:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:00.105 07:03:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:00.105 07:03:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:00.105 07:03:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:00.105 No valid GPT data, bailing 00:07:00.105 07:03:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:00.105 07:03:42 -- scripts/common.sh@394 -- # pt= 00:07:00.105 07:03:42 -- scripts/common.sh@395 -- # return 1 00:07:00.105 07:03:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:00.105 1+0 records in 00:07:00.105 1+0 records out 00:07:00.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516608 s, 203 MB/s 00:07:00.105 07:03:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:00.105 07:03:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:00.105 07:03:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:00.105 07:03:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:00.105 07:03:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:07:00.363 No valid GPT data, bailing 00:07:00.363 07:03:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:00.363 07:03:42 -- scripts/common.sh@394 -- # pt= 00:07:00.363 07:03:42 -- scripts/common.sh@395 -- # return 1 00:07:00.363 07:03:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:00.363 1+0 records in 00:07:00.363 1+0 records out 00:07:00.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055416 s, 189 MB/s 00:07:00.363 07:03:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:00.363 07:03:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:00.363 07:03:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:00.363 07:03:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:00.363 07:03:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:00.363 No valid GPT data, bailing 00:07:00.363 07:03:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:00.363 07:03:42 -- scripts/common.sh@394 -- # pt= 00:07:00.363 07:03:42 -- scripts/common.sh@395 -- # return 1 00:07:00.363 07:03:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:00.363 1+0 records in 00:07:00.363 1+0 records out 00:07:00.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00370286 s, 283 MB/s 00:07:00.363 07:03:42 -- spdk/autotest.sh@105 -- # sync 00:07:00.363 07:03:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:00.363 07:03:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:00.363 07:03:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:02.910 07:03:44 -- spdk/autotest.sh@111 -- # uname -s 00:07:02.910 07:03:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:02.910 07:03:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:02.910 07:03:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:07:03.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:03.428 Hugepages 00:07:03.428 node hugesize free / total 00:07:03.428 node0 1048576kB 0 / 0 00:07:03.428 node0 2048kB 0 / 0 00:07:03.428 00:07:03.428 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:03.428 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:03.428 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:03.687 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:03.687 07:03:45 -- spdk/autotest.sh@117 -- # uname -s 00:07:03.687 07:03:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:03.687 07:03:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:03.687 07:03:45 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:04.622 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:04.622 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:04.881 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:04.881 07:03:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:05.819 07:03:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:05.819 07:03:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:05.819 07:03:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:05.819 07:03:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:05.819 07:03:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:05.819 07:03:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:05.819 07:03:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:05.819 07:03:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:05.819 07:03:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:06.077 07:03:48 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:06.077 07:03:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:06.077 07:03:48 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:06.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.335 Waiting for block devices as requested 00:07:06.335 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.335 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.594 07:03:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:06.594 07:03:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:06.594 07:03:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:06.594 07:03:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:06.594 07:03:48 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:06.594 07:03:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1543 -- # continue 00:07:06.594 07:03:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:06.594 07:03:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:06.594 07:03:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:06.594 07:03:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:06.594 07:03:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:06.594 07:03:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:06.594 07:03:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 
00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:06.594 07:03:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:06.594 07:03:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:06.594 07:03:48 -- common/autotest_common.sh@1543 -- # continue 00:07:06.594 07:03:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:06.594 07:03:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.594 07:03:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.594 07:03:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:06.594 07:03:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.594 07:03:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.594 07:03:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:07.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:07.160 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:07.418 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:07.418 07:03:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:07.418 07:03:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.418 07:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:07.418 07:03:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:07.418 07:03:49 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:07.418 07:03:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:07.418 07:03:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:07.418 07:03:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:07.418 07:03:49 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:07.418 07:03:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:07.418 07:03:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:07.418 
07:03:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:07.418 07:03:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:07.418 07:03:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:07.418 07:03:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:07.418 07:03:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:07.418 07:03:49 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:07.418 07:03:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:07.418 07:03:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:07.418 07:03:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:07.418 07:03:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:07.418 07:03:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:07.418 07:03:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:07.418 07:03:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:07.418 07:03:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:07.418 07:03:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:07.418 07:03:49 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:07.418 07:03:49 -- common/autotest_common.sh@1572 -- # return 0 00:07:07.418 07:03:49 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:07.418 07:03:49 -- common/autotest_common.sh@1580 -- # return 0 00:07:07.418 07:03:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:07.418 07:03:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:07.418 07:03:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:07.418 07:03:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:07.418 07:03:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:07.418 07:03:49 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.418 07:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:07.418 07:03:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:07.418 07:03:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:07.418 07:03:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.418 07:03:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.418 07:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:07.418 ************************************ 00:07:07.418 START TEST env 00:07:07.418 ************************************ 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:07.676 * Looking for test storage... 00:07:07.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.676 07:03:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.676 07:03:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.676 07:03:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.676 07:03:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.676 07:03:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.676 07:03:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.676 07:03:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.676 07:03:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.676 07:03:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.676 07:03:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.676 07:03:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.676 07:03:49 env -- 
scripts/common.sh@344 -- # case "$op" in 00:07:07.676 07:03:49 env -- scripts/common.sh@345 -- # : 1 00:07:07.676 07:03:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.676 07:03:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.676 07:03:49 env -- scripts/common.sh@365 -- # decimal 1 00:07:07.676 07:03:49 env -- scripts/common.sh@353 -- # local d=1 00:07:07.676 07:03:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.676 07:03:49 env -- scripts/common.sh@355 -- # echo 1 00:07:07.676 07:03:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.676 07:03:49 env -- scripts/common.sh@366 -- # decimal 2 00:07:07.676 07:03:49 env -- scripts/common.sh@353 -- # local d=2 00:07:07.676 07:03:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.676 07:03:49 env -- scripts/common.sh@355 -- # echo 2 00:07:07.676 07:03:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.676 07:03:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.676 07:03:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.676 07:03:49 env -- scripts/common.sh@368 -- # return 0 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.676 --rc genhtml_branch_coverage=1 00:07:07.676 --rc genhtml_function_coverage=1 00:07:07.676 --rc genhtml_legend=1 00:07:07.676 --rc geninfo_all_blocks=1 00:07:07.676 --rc geninfo_unexecuted_blocks=1 00:07:07.676 00:07:07.676 ' 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.676 --rc genhtml_branch_coverage=1 00:07:07.676 --rc genhtml_function_coverage=1 00:07:07.676 --rc genhtml_legend=1 00:07:07.676 --rc 
geninfo_all_blocks=1 00:07:07.676 --rc geninfo_unexecuted_blocks=1 00:07:07.676 00:07:07.676 ' 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.676 --rc genhtml_branch_coverage=1 00:07:07.676 --rc genhtml_function_coverage=1 00:07:07.676 --rc genhtml_legend=1 00:07:07.676 --rc geninfo_all_blocks=1 00:07:07.676 --rc geninfo_unexecuted_blocks=1 00:07:07.676 00:07:07.676 ' 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.676 --rc genhtml_branch_coverage=1 00:07:07.676 --rc genhtml_function_coverage=1 00:07:07.676 --rc genhtml_legend=1 00:07:07.676 --rc geninfo_all_blocks=1 00:07:07.676 --rc geninfo_unexecuted_blocks=1 00:07:07.676 00:07:07.676 ' 00:07:07.676 07:03:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.676 07:03:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.676 07:03:49 env -- common/autotest_common.sh@10 -- # set +x 00:07:07.676 ************************************ 00:07:07.676 START TEST env_memory 00:07:07.676 ************************************ 00:07:07.676 07:03:49 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:07.676 00:07:07.676 00:07:07.676 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.676 http://cunit.sourceforge.net/ 00:07:07.676 00:07:07.676 00:07:07.676 Suite: memory 00:07:07.935 Test: alloc and free memory map ...[2024-11-20 07:03:49.978012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:07.935 passed 00:07:07.935 Test: mem map translation ...[2024-11-20 07:03:50.027349] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:07.935 [2024-11-20 07:03:50.027406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:07.935 [2024-11-20 07:03:50.027482] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:07.935 [2024-11-20 07:03:50.027508] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:07.935 passed 00:07:07.935 Test: mem map registration ...[2024-11-20 07:03:50.104693] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:07.935 [2024-11-20 07:03:50.104751] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:07.935 passed 00:07:08.194 Test: mem map adjacent registrations ...passed 00:07:08.194 00:07:08.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.194 suites 1 1 n/a 0 0 00:07:08.194 tests 4 4 4 0 0 00:07:08.194 asserts 152 152 152 0 n/a 00:07:08.194 00:07:08.194 Elapsed time = 0.263 seconds 00:07:08.194 00:07:08.194 real 0m0.313s 00:07:08.194 user 0m0.275s 00:07:08.194 sys 0m0.028s 00:07:08.194 07:03:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.194 07:03:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:08.194 ************************************ 00:07:08.194 END TEST env_memory 00:07:08.194 ************************************ 00:07:08.194 07:03:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:08.194 
07:03:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.194 07:03:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.195 07:03:50 env -- common/autotest_common.sh@10 -- # set +x 00:07:08.195 ************************************ 00:07:08.195 START TEST env_vtophys 00:07:08.195 ************************************ 00:07:08.195 07:03:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:08.195 EAL: lib.eal log level changed from notice to debug 00:07:08.195 EAL: Detected lcore 0 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 1 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 2 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 3 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 4 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 5 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 6 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 7 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 8 as core 0 on socket 0 00:07:08.195 EAL: Detected lcore 9 as core 0 on socket 0 00:07:08.195 EAL: Maximum logical cores by configuration: 128 00:07:08.195 EAL: Detected CPU lcores: 10 00:07:08.195 EAL: Detected NUMA nodes: 1 00:07:08.195 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:08.195 EAL: Detected shared linkage of DPDK 00:07:08.195 EAL: No shared files mode enabled, IPC will be disabled 00:07:08.195 EAL: Selected IOVA mode 'PA' 00:07:08.195 EAL: Probing VFIO support... 00:07:08.195 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:08.195 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:08.195 EAL: Ask a virtual area of 0x2e000 bytes 00:07:08.195 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:08.195 EAL: Setting up physically contiguous memory... 
00:07:08.195 EAL: Setting maximum number of open files to 524288 00:07:08.195 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:08.195 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:08.195 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.195 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:08.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.195 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.195 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:08.195 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:08.195 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.195 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:08.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.195 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.195 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:08.195 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:08.195 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.195 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:08.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.195 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.195 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:08.195 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:08.195 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.195 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:08.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.195 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.195 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:08.195 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:08.195 EAL: Hugepages will be freed exactly as allocated. 
00:07:08.195 EAL: No shared files mode enabled, IPC is disabled 00:07:08.195 EAL: No shared files mode enabled, IPC is disabled 00:07:08.455 EAL: TSC frequency is ~2290000 KHz 00:07:08.455 EAL: Main lcore 0 is ready (tid=7f9f5b0c7a40;cpuset=[0]) 00:07:08.455 EAL: Trying to obtain current memory policy. 00:07:08.455 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.455 EAL: Restoring previous memory policy: 0 00:07:08.455 EAL: request: mp_malloc_sync 00:07:08.455 EAL: No shared files mode enabled, IPC is disabled 00:07:08.455 EAL: Heap on socket 0 was expanded by 2MB 00:07:08.455 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:08.455 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:08.455 EAL: Mem event callback 'spdk:(nil)' registered 00:07:08.455 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:08.455 00:07:08.455 00:07:08.455 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.455 http://cunit.sourceforge.net/ 00:07:08.455 00:07:08.455 00:07:08.455 Suite: components_suite 00:07:08.715 Test: vtophys_malloc_test ...passed 00:07:08.715 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:08.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.715 EAL: Restoring previous memory policy: 4 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was expanded by 4MB 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was shrunk by 4MB 00:07:08.715 EAL: Trying to obtain current memory policy. 
00:07:08.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.715 EAL: Restoring previous memory policy: 4 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was expanded by 6MB 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was shrunk by 6MB 00:07:08.715 EAL: Trying to obtain current memory policy. 00:07:08.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.715 EAL: Restoring previous memory policy: 4 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was expanded by 10MB 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was shrunk by 10MB 00:07:08.715 EAL: Trying to obtain current memory policy. 00:07:08.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.715 EAL: Restoring previous memory policy: 4 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was expanded by 18MB 00:07:08.715 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.715 EAL: request: mp_malloc_sync 00:07:08.715 EAL: No shared files mode enabled, IPC is disabled 00:07:08.715 EAL: Heap on socket 0 was shrunk by 18MB 00:07:08.975 EAL: Trying to obtain current memory policy. 
00:07:08.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.975 EAL: Restoring previous memory policy: 4 00:07:08.975 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.975 EAL: request: mp_malloc_sync 00:07:08.975 EAL: No shared files mode enabled, IPC is disabled 00:07:08.975 EAL: Heap on socket 0 was expanded by 34MB 00:07:08.975 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.975 EAL: request: mp_malloc_sync 00:07:08.975 EAL: No shared files mode enabled, IPC is disabled 00:07:08.975 EAL: Heap on socket 0 was shrunk by 34MB 00:07:08.975 EAL: Trying to obtain current memory policy. 00:07:08.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.975 EAL: Restoring previous memory policy: 4 00:07:08.975 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.975 EAL: request: mp_malloc_sync 00:07:08.975 EAL: No shared files mode enabled, IPC is disabled 00:07:08.975 EAL: Heap on socket 0 was expanded by 66MB 00:07:09.234 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.234 EAL: request: mp_malloc_sync 00:07:09.234 EAL: No shared files mode enabled, IPC is disabled 00:07:09.234 EAL: Heap on socket 0 was shrunk by 66MB 00:07:09.234 EAL: Trying to obtain current memory policy. 00:07:09.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:09.234 EAL: Restoring previous memory policy: 4 00:07:09.234 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.234 EAL: request: mp_malloc_sync 00:07:09.234 EAL: No shared files mode enabled, IPC is disabled 00:07:09.234 EAL: Heap on socket 0 was expanded by 130MB 00:07:09.493 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.493 EAL: request: mp_malloc_sync 00:07:09.493 EAL: No shared files mode enabled, IPC is disabled 00:07:09.493 EAL: Heap on socket 0 was shrunk by 130MB 00:07:09.755 EAL: Trying to obtain current memory policy. 
00:07:09.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:09.755 EAL: Restoring previous memory policy: 4 00:07:09.755 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.755 EAL: request: mp_malloc_sync 00:07:09.755 EAL: No shared files mode enabled, IPC is disabled 00:07:09.755 EAL: Heap on socket 0 was expanded by 258MB 00:07:10.329 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.329 EAL: request: mp_malloc_sync 00:07:10.329 EAL: No shared files mode enabled, IPC is disabled 00:07:10.329 EAL: Heap on socket 0 was shrunk by 258MB 00:07:10.897 EAL: Trying to obtain current memory policy. 00:07:10.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.897 EAL: Restoring previous memory policy: 4 00:07:10.898 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.898 EAL: request: mp_malloc_sync 00:07:10.898 EAL: No shared files mode enabled, IPC is disabled 00:07:10.898 EAL: Heap on socket 0 was expanded by 514MB 00:07:12.275 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.275 EAL: request: mp_malloc_sync 00:07:12.275 EAL: No shared files mode enabled, IPC is disabled 00:07:12.275 EAL: Heap on socket 0 was shrunk by 514MB 00:07:13.212 EAL: Trying to obtain current memory policy. 
00:07:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:13.212 EAL: Restoring previous memory policy: 4 00:07:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.212 EAL: request: mp_malloc_sync 00:07:13.212 EAL: No shared files mode enabled, IPC is disabled 00:07:13.212 EAL: Heap on socket 0 was expanded by 1026MB 00:07:15.754 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.754 EAL: request: mp_malloc_sync 00:07:15.754 EAL: No shared files mode enabled, IPC is disabled 00:07:15.754 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:17.669 passed 00:07:17.669 00:07:17.669 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.669 suites 1 1 n/a 0 0 00:07:17.669 tests 2 2 2 0 0 00:07:17.669 asserts 5796 5796 5796 0 n/a 00:07:17.669 00:07:17.669 Elapsed time = 8.923 seconds 00:07:17.669 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.669 EAL: request: mp_malloc_sync 00:07:17.669 EAL: No shared files mode enabled, IPC is disabled 00:07:17.669 EAL: Heap on socket 0 was shrunk by 2MB 00:07:17.669 EAL: No shared files mode enabled, IPC is disabled 00:07:17.669 EAL: No shared files mode enabled, IPC is disabled 00:07:17.669 EAL: No shared files mode enabled, IPC is disabled 00:07:17.669 00:07:17.669 real 0m9.257s 00:07:17.669 user 0m8.274s 00:07:17.669 sys 0m0.819s 00:07:17.669 07:03:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.669 07:03:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:17.669 ************************************ 00:07:17.669 END TEST env_vtophys 00:07:17.669 ************************************ 00:07:17.669 07:03:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:17.669 07:03:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.669 07:03:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.669 07:03:59 env -- common/autotest_common.sh@10 -- # set +x 00:07:17.669 
************************************ 00:07:17.669 START TEST env_pci 00:07:17.669 ************************************ 00:07:17.669 07:03:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:17.669 00:07:17.669 00:07:17.669 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.669 http://cunit.sourceforge.net/ 00:07:17.669 00:07:17.669 00:07:17.669 Suite: pci 00:07:17.669 Test: pci_hook ...[2024-11-20 07:03:59.650897] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56853 has claimed it 00:07:17.669 passed 00:07:17.669 00:07:17.669 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.669 suites 1 1 n/a 0 0 00:07:17.669 tests 1 1 1 0 0 00:07:17.669 asserts 25 25 25 0 n/a 00:07:17.669 00:07:17.669 Elapsed time = 0.009 seconds 00:07:17.669 EAL: Cannot find device (10000:00:01.0) 00:07:17.669 EAL: Failed to attach device on primary process 00:07:17.669 00:07:17.669 00:07:17.669 real 0m0.105s 00:07:17.669 user 0m0.049s 00:07:17.669 sys 0m0.055s 00:07:17.669 07:03:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.669 07:03:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:17.669 ************************************ 00:07:17.669 END TEST env_pci 00:07:17.669 ************************************ 00:07:17.669 07:03:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:17.669 07:03:59 env -- env/env.sh@15 -- # uname 00:07:17.669 07:03:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:17.669 07:03:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:17.669 07:03:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:17.669 07:03:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.669 07:03:59 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.669 07:03:59 env -- common/autotest_common.sh@10 -- # set +x 00:07:17.669 ************************************ 00:07:17.669 START TEST env_dpdk_post_init 00:07:17.669 ************************************ 00:07:17.669 07:03:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:17.669 EAL: Detected CPU lcores: 10 00:07:17.669 EAL: Detected NUMA nodes: 1 00:07:17.669 EAL: Detected shared linkage of DPDK 00:07:17.669 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:17.669 EAL: Selected IOVA mode 'PA' 00:07:17.928 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:17.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:17.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:17.928 Starting DPDK initialization... 00:07:17.928 Starting SPDK post initialization... 00:07:17.928 SPDK NVMe probe 00:07:17.928 Attaching to 0000:00:10.0 00:07:17.928 Attaching to 0000:00:11.0 00:07:17.928 Attached to 0000:00:10.0 00:07:17.928 Attached to 0000:00:11.0 00:07:17.928 Cleaning up... 
00:07:17.928 00:07:17.928 real 0m0.286s 00:07:17.928 user 0m0.097s 00:07:17.928 sys 0m0.089s 00:07:17.928 07:04:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.928 07:04:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:17.928 ************************************ 00:07:17.928 END TEST env_dpdk_post_init 00:07:17.928 ************************************ 00:07:17.928 07:04:00 env -- env/env.sh@26 -- # uname 00:07:17.928 07:04:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:17.928 07:04:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:17.928 07:04:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.928 07:04:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.928 07:04:00 env -- common/autotest_common.sh@10 -- # set +x 00:07:17.928 ************************************ 00:07:17.928 START TEST env_mem_callbacks 00:07:17.928 ************************************ 00:07:17.928 07:04:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:18.187 EAL: Detected CPU lcores: 10 00:07:18.187 EAL: Detected NUMA nodes: 1 00:07:18.187 EAL: Detected shared linkage of DPDK 00:07:18.187 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:18.187 EAL: Selected IOVA mode 'PA' 00:07:18.187 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:18.187 00:07:18.187 00:07:18.187 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.187 http://cunit.sourceforge.net/ 00:07:18.187 00:07:18.187 00:07:18.187 Suite: memory 00:07:18.187 Test: test ... 
00:07:18.187 register 0x200000200000 2097152 00:07:18.187 malloc 3145728 00:07:18.187 register 0x200000400000 4194304 00:07:18.187 buf 0x2000004fffc0 len 3145728 PASSED 00:07:18.187 malloc 64 00:07:18.187 buf 0x2000004ffec0 len 64 PASSED 00:07:18.187 malloc 4194304 00:07:18.187 register 0x200000800000 6291456 00:07:18.187 buf 0x2000009fffc0 len 4194304 PASSED 00:07:18.187 free 0x2000004fffc0 3145728 00:07:18.187 free 0x2000004ffec0 64 00:07:18.187 unregister 0x200000400000 4194304 PASSED 00:07:18.187 free 0x2000009fffc0 4194304 00:07:18.187 unregister 0x200000800000 6291456 PASSED 00:07:18.187 malloc 8388608 00:07:18.187 register 0x200000400000 10485760 00:07:18.187 buf 0x2000005fffc0 len 8388608 PASSED 00:07:18.187 free 0x2000005fffc0 8388608 00:07:18.187 unregister 0x200000400000 10485760 PASSED 00:07:18.187 passed 00:07:18.187 00:07:18.187 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.187 suites 1 1 n/a 0 0 00:07:18.187 tests 1 1 1 0 0 00:07:18.187 asserts 15 15 15 0 n/a 00:07:18.187 00:07:18.187 Elapsed time = 0.091 seconds 00:07:18.187 00:07:18.187 real 0m0.294s 00:07:18.187 user 0m0.119s 00:07:18.187 sys 0m0.074s 00:07:18.187 07:04:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.187 ************************************ 00:07:18.187 END TEST env_mem_callbacks 00:07:18.187 ************************************ 00:07:18.187 07:04:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:18.445 00:07:18.445 real 0m10.810s 00:07:18.445 user 0m9.046s 00:07:18.445 sys 0m1.402s 00:07:18.445 07:04:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.445 07:04:00 env -- common/autotest_common.sh@10 -- # set +x 00:07:18.445 ************************************ 00:07:18.445 END TEST env 00:07:18.445 ************************************ 00:07:18.445 07:04:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:18.445 07:04:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.445 07:04:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.445 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:18.445 ************************************ 00:07:18.445 START TEST rpc 00:07:18.445 ************************************ 00:07:18.445 07:04:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:18.445 * Looking for test storage... 00:07:18.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:18.445 07:04:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.445 07:04:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.445 07:04:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.705 07:04:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.705 07:04:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.705 07:04:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.705 07:04:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.705 07:04:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.705 07:04:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.705 07:04:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.705 07:04:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:18.705 07:04:00 rpc -- scripts/common.sh@345 -- # : 1 00:07:18.705 07:04:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.705 07:04:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.705 07:04:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:18.705 07:04:00 rpc -- scripts/common.sh@353 -- # local d=1 00:07:18.705 07:04:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.705 07:04:00 rpc -- scripts/common.sh@355 -- # echo 1 00:07:18.705 07:04:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.705 07:04:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@353 -- # local d=2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.705 07:04:00 rpc -- scripts/common.sh@355 -- # echo 2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.705 07:04:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.705 07:04:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.705 07:04:00 rpc -- scripts/common.sh@368 -- # return 0 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.705 --rc genhtml_branch_coverage=1 00:07:18.705 --rc genhtml_function_coverage=1 00:07:18.705 --rc genhtml_legend=1 00:07:18.705 --rc geninfo_all_blocks=1 00:07:18.705 --rc geninfo_unexecuted_blocks=1 00:07:18.705 00:07:18.705 ' 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.705 --rc genhtml_branch_coverage=1 00:07:18.705 --rc genhtml_function_coverage=1 00:07:18.705 --rc genhtml_legend=1 00:07:18.705 --rc geninfo_all_blocks=1 00:07:18.705 --rc geninfo_unexecuted_blocks=1 00:07:18.705 00:07:18.705 ' 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:18.705 --rc genhtml_branch_coverage=1 00:07:18.705 --rc genhtml_function_coverage=1 00:07:18.705 --rc genhtml_legend=1 00:07:18.705 --rc geninfo_all_blocks=1 00:07:18.705 --rc geninfo_unexecuted_blocks=1 00:07:18.705 00:07:18.705 ' 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.705 --rc genhtml_branch_coverage=1 00:07:18.705 --rc genhtml_function_coverage=1 00:07:18.705 --rc genhtml_legend=1 00:07:18.705 --rc geninfo_all_blocks=1 00:07:18.705 --rc geninfo_unexecuted_blocks=1 00:07:18.705 00:07:18.705 ' 00:07:18.705 07:04:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56985 00:07:18.705 07:04:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.705 07:04:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:18.705 07:04:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56985 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 56985 ']' 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.705 07:04:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.705 [2024-11-20 07:04:00.901240] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:18.705 [2024-11-20 07:04:00.901387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56985 ] 00:07:18.964 [2024-11-20 07:04:01.062213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.964 [2024-11-20 07:04:01.204384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:18.964 [2024-11-20 07:04:01.204451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56985' to capture a snapshot of events at runtime. 00:07:18.964 [2024-11-20 07:04:01.204462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.964 [2024-11-20 07:04:01.204472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.964 [2024-11-20 07:04:01.204479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56985 for offline analysis/debug. 
00:07:18.964 [2024-11-20 07:04:01.205810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.338 07:04:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.338 07:04:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.338 07:04:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:20.338 07:04:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:20.338 07:04:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:20.338 07:04:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:20.338 07:04:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.338 07:04:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.339 07:04:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.339 ************************************ 00:07:20.339 START TEST rpc_integrity 00:07:20.339 ************************************ 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:20.339 07:04:02 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:20.339 { 00:07:20.339 "name": "Malloc0", 00:07:20.339 "aliases": [ 00:07:20.339 "ae132136-623e-475d-ac35-67bff9a7e68e" 00:07:20.339 ], 00:07:20.339 "product_name": "Malloc disk", 00:07:20.339 "block_size": 512, 00:07:20.339 "num_blocks": 16384, 00:07:20.339 "uuid": "ae132136-623e-475d-ac35-67bff9a7e68e", 00:07:20.339 "assigned_rate_limits": { 00:07:20.339 "rw_ios_per_sec": 0, 00:07:20.339 "rw_mbytes_per_sec": 0, 00:07:20.339 "r_mbytes_per_sec": 0, 00:07:20.339 "w_mbytes_per_sec": 0 00:07:20.339 }, 00:07:20.339 "claimed": false, 00:07:20.339 "zoned": false, 00:07:20.339 "supported_io_types": { 00:07:20.339 "read": true, 00:07:20.339 "write": true, 00:07:20.339 "unmap": true, 00:07:20.339 "flush": true, 00:07:20.339 "reset": true, 00:07:20.339 "nvme_admin": false, 00:07:20.339 "nvme_io": false, 00:07:20.339 "nvme_io_md": false, 00:07:20.339 "write_zeroes": true, 00:07:20.339 "zcopy": true, 00:07:20.339 "get_zone_info": false, 00:07:20.339 "zone_management": false, 00:07:20.339 "zone_append": false, 00:07:20.339 "compare": false, 00:07:20.339 "compare_and_write": false, 00:07:20.339 "abort": true, 00:07:20.339 "seek_hole": false, 
00:07:20.339 "seek_data": false, 00:07:20.339 "copy": true, 00:07:20.339 "nvme_iov_md": false 00:07:20.339 }, 00:07:20.339 "memory_domains": [ 00:07:20.339 { 00:07:20.339 "dma_device_id": "system", 00:07:20.339 "dma_device_type": 1 00:07:20.339 }, 00:07:20.339 { 00:07:20.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.339 "dma_device_type": 2 00:07:20.339 } 00:07:20.339 ], 00:07:20.339 "driver_specific": {} 00:07:20.339 } 00:07:20.339 ]' 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.339 [2024-11-20 07:04:02.315026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:20.339 [2024-11-20 07:04:02.315116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.339 [2024-11-20 07:04:02.315176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:20.339 [2024-11-20 07:04:02.315216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.339 [2024-11-20 07:04:02.318223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.339 [2024-11-20 07:04:02.318279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:20.339 Passthru0 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:20.339 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.339 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:20.339 { 00:07:20.339 "name": "Malloc0", 00:07:20.339 "aliases": [ 00:07:20.339 "ae132136-623e-475d-ac35-67bff9a7e68e" 00:07:20.339 ], 00:07:20.339 "product_name": "Malloc disk", 00:07:20.339 "block_size": 512, 00:07:20.339 "num_blocks": 16384, 00:07:20.339 "uuid": "ae132136-623e-475d-ac35-67bff9a7e68e", 00:07:20.339 "assigned_rate_limits": { 00:07:20.339 "rw_ios_per_sec": 0, 00:07:20.339 "rw_mbytes_per_sec": 0, 00:07:20.339 "r_mbytes_per_sec": 0, 00:07:20.339 "w_mbytes_per_sec": 0 00:07:20.339 }, 00:07:20.339 "claimed": true, 00:07:20.339 "claim_type": "exclusive_write", 00:07:20.339 "zoned": false, 00:07:20.339 "supported_io_types": { 00:07:20.339 "read": true, 00:07:20.339 "write": true, 00:07:20.339 "unmap": true, 00:07:20.339 "flush": true, 00:07:20.339 "reset": true, 00:07:20.339 "nvme_admin": false, 00:07:20.339 "nvme_io": false, 00:07:20.339 "nvme_io_md": false, 00:07:20.339 "write_zeroes": true, 00:07:20.339 "zcopy": true, 00:07:20.339 "get_zone_info": false, 00:07:20.339 "zone_management": false, 00:07:20.339 "zone_append": false, 00:07:20.339 "compare": false, 00:07:20.339 "compare_and_write": false, 00:07:20.339 "abort": true, 00:07:20.339 "seek_hole": false, 00:07:20.339 "seek_data": false, 00:07:20.339 "copy": true, 00:07:20.339 "nvme_iov_md": false 00:07:20.339 }, 00:07:20.339 "memory_domains": [ 00:07:20.339 { 00:07:20.339 "dma_device_id": "system", 00:07:20.339 "dma_device_type": 1 00:07:20.339 }, 00:07:20.339 { 00:07:20.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.339 "dma_device_type": 2 00:07:20.339 } 00:07:20.339 ], 00:07:20.339 "driver_specific": {} 00:07:20.339 }, 00:07:20.339 { 00:07:20.339 "name": "Passthru0", 00:07:20.339 "aliases": [ 00:07:20.339 "475fef9e-b1d4-5ef2-9bc0-c71e3e45fdc7" 00:07:20.340 ], 00:07:20.340 "product_name": "passthru", 00:07:20.340 
"block_size": 512, 00:07:20.340 "num_blocks": 16384, 00:07:20.340 "uuid": "475fef9e-b1d4-5ef2-9bc0-c71e3e45fdc7", 00:07:20.340 "assigned_rate_limits": { 00:07:20.340 "rw_ios_per_sec": 0, 00:07:20.340 "rw_mbytes_per_sec": 0, 00:07:20.340 "r_mbytes_per_sec": 0, 00:07:20.340 "w_mbytes_per_sec": 0 00:07:20.340 }, 00:07:20.340 "claimed": false, 00:07:20.340 "zoned": false, 00:07:20.340 "supported_io_types": { 00:07:20.340 "read": true, 00:07:20.340 "write": true, 00:07:20.340 "unmap": true, 00:07:20.340 "flush": true, 00:07:20.340 "reset": true, 00:07:20.340 "nvme_admin": false, 00:07:20.340 "nvme_io": false, 00:07:20.340 "nvme_io_md": false, 00:07:20.340 "write_zeroes": true, 00:07:20.340 "zcopy": true, 00:07:20.340 "get_zone_info": false, 00:07:20.340 "zone_management": false, 00:07:20.340 "zone_append": false, 00:07:20.340 "compare": false, 00:07:20.340 "compare_and_write": false, 00:07:20.340 "abort": true, 00:07:20.340 "seek_hole": false, 00:07:20.340 "seek_data": false, 00:07:20.340 "copy": true, 00:07:20.340 "nvme_iov_md": false 00:07:20.340 }, 00:07:20.340 "memory_domains": [ 00:07:20.340 { 00:07:20.340 "dma_device_id": "system", 00:07:20.340 "dma_device_type": 1 00:07:20.340 }, 00:07:20.340 { 00:07:20.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.340 "dma_device_type": 2 00:07:20.340 } 00:07:20.340 ], 00:07:20.340 "driver_specific": { 00:07:20.340 "passthru": { 00:07:20.340 "name": "Passthru0", 00:07:20.340 "base_bdev_name": "Malloc0" 00:07:20.340 } 00:07:20.340 } 00:07:20.340 } 00:07:20.340 ]' 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.340 07:04:02 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:20.340 07:04:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:20.340 00:07:20.340 real 0m0.284s 00:07:20.340 user 0m0.153s 00:07:20.340 sys 0m0.035s 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.340 07:04:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.340 ************************************ 00:07:20.340 END TEST rpc_integrity 00:07:20.340 ************************************ 00:07:20.340 07:04:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:20.340 07:04:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.340 07:04:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.340 07:04:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.340 ************************************ 00:07:20.340 START TEST rpc_plugins 00:07:20.340 ************************************ 00:07:20.340 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:20.340 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:20.340 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.340 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:20.340 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.340 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:20.340 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:20.340 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.340 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:20.340 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.340 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:20.340 { 00:07:20.340 "name": "Malloc1", 00:07:20.340 "aliases": [ 00:07:20.340 "a727332a-afd4-4e23-a1d4-bd4cc4591267" 00:07:20.340 ], 00:07:20.340 "product_name": "Malloc disk", 00:07:20.340 "block_size": 4096, 00:07:20.340 "num_blocks": 256, 00:07:20.340 "uuid": "a727332a-afd4-4e23-a1d4-bd4cc4591267", 00:07:20.340 "assigned_rate_limits": { 00:07:20.340 "rw_ios_per_sec": 0, 00:07:20.340 "rw_mbytes_per_sec": 0, 00:07:20.340 "r_mbytes_per_sec": 0, 00:07:20.340 "w_mbytes_per_sec": 0 00:07:20.340 }, 00:07:20.340 "claimed": false, 00:07:20.340 "zoned": false, 00:07:20.340 "supported_io_types": { 00:07:20.340 "read": true, 00:07:20.340 "write": true, 00:07:20.340 "unmap": true, 00:07:20.340 "flush": true, 00:07:20.340 "reset": true, 00:07:20.340 "nvme_admin": false, 00:07:20.340 "nvme_io": false, 00:07:20.340 "nvme_io_md": false, 00:07:20.340 "write_zeroes": true, 00:07:20.340 "zcopy": true, 00:07:20.340 "get_zone_info": false, 00:07:20.340 "zone_management": false, 00:07:20.340 "zone_append": false, 00:07:20.340 "compare": false, 00:07:20.340 "compare_and_write": false, 00:07:20.340 "abort": true, 00:07:20.340 "seek_hole": false, 00:07:20.340 "seek_data": false, 00:07:20.340 "copy": 
true, 00:07:20.340 "nvme_iov_md": false 00:07:20.340 }, 00:07:20.341 "memory_domains": [ 00:07:20.341 { 00:07:20.341 "dma_device_id": "system", 00:07:20.341 "dma_device_type": 1 00:07:20.341 }, 00:07:20.341 { 00:07:20.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.341 "dma_device_type": 2 00:07:20.341 } 00:07:20.341 ], 00:07:20.341 "driver_specific": {} 00:07:20.341 } 00:07:20.341 ]' 00:07:20.341 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:20.341 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:20.341 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:20.341 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.341 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:20.341 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.341 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:20.341 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.341 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:20.341 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.341 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:20.341 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:20.599 07:04:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:20.599 00:07:20.599 real 0m0.115s 00:07:20.599 user 0m0.069s 00:07:20.599 sys 0m0.010s 00:07:20.599 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.599 07:04:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:20.599 ************************************ 00:07:20.599 END TEST rpc_plugins 00:07:20.599 ************************************ 00:07:20.599 07:04:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:20.599 07:04:02 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.599 07:04:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.599 07:04:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.599 ************************************ 00:07:20.599 START TEST rpc_trace_cmd_test 00:07:20.599 ************************************ 00:07:20.599 07:04:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:20.599 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:20.599 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:20.599 07:04:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.599 07:04:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.599 07:04:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.599 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:20.599 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56985", 00:07:20.599 "tpoint_group_mask": "0x8", 00:07:20.599 "iscsi_conn": { 00:07:20.599 "mask": "0x2", 00:07:20.599 "tpoint_mask": "0x0" 00:07:20.599 }, 00:07:20.599 "scsi": { 00:07:20.599 "mask": "0x4", 00:07:20.599 "tpoint_mask": "0x0" 00:07:20.599 }, 00:07:20.599 "bdev": { 00:07:20.599 "mask": "0x8", 00:07:20.599 "tpoint_mask": "0xffffffffffffffff" 00:07:20.599 }, 00:07:20.599 "nvmf_rdma": { 00:07:20.599 "mask": "0x10", 00:07:20.599 "tpoint_mask": "0x0" 00:07:20.599 }, 00:07:20.599 "nvmf_tcp": { 00:07:20.599 "mask": "0x20", 00:07:20.599 "tpoint_mask": "0x0" 00:07:20.599 }, 00:07:20.599 "ftl": { 00:07:20.599 "mask": "0x40", 00:07:20.599 "tpoint_mask": "0x0" 00:07:20.599 }, 00:07:20.599 "blobfs": { 00:07:20.599 "mask": "0x80", 00:07:20.599 "tpoint_mask": "0x0" 00:07:20.599 }, 00:07:20.599 "dsa": { 00:07:20.599 "mask": "0x200", 00:07:20.599 "tpoint_mask": "0x0" 00:07:20.599 }, 00:07:20.599 "thread": { 00:07:20.600 "mask": "0x400", 00:07:20.600 
"tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "nvme_pcie": { 00:07:20.600 "mask": "0x800", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "iaa": { 00:07:20.600 "mask": "0x1000", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "nvme_tcp": { 00:07:20.600 "mask": "0x2000", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "bdev_nvme": { 00:07:20.600 "mask": "0x4000", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "sock": { 00:07:20.600 "mask": "0x8000", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "blob": { 00:07:20.600 "mask": "0x10000", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "bdev_raid": { 00:07:20.600 "mask": "0x20000", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 }, 00:07:20.600 "scheduler": { 00:07:20.600 "mask": "0x40000", 00:07:20.600 "tpoint_mask": "0x0" 00:07:20.600 } 00:07:20.600 }' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:20.600 00:07:20.600 real 0m0.203s 00:07:20.600 user 0m0.177s 00:07:20.600 sys 0m0.019s 00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:20.600 07:04:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.600 ************************************ 00:07:20.600 END TEST rpc_trace_cmd_test 00:07:20.600 ************************************ 00:07:20.600 07:04:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:20.600 07:04:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:20.600 07:04:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:20.600 07:04:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.600 07:04:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.600 07:04:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.862 ************************************ 00:07:20.862 START TEST rpc_daemon_integrity 00:07:20.862 ************************************ 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.862 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:20.862 { 00:07:20.862 "name": "Malloc2", 00:07:20.862 "aliases": [ 00:07:20.862 "87a5754b-bd64-4c7c-a393-e8713e45d808" 00:07:20.862 ], 00:07:20.862 "product_name": "Malloc disk", 00:07:20.862 "block_size": 512, 00:07:20.862 "num_blocks": 16384, 00:07:20.862 "uuid": "87a5754b-bd64-4c7c-a393-e8713e45d808", 00:07:20.862 "assigned_rate_limits": { 00:07:20.862 "rw_ios_per_sec": 0, 00:07:20.862 "rw_mbytes_per_sec": 0, 00:07:20.862 "r_mbytes_per_sec": 0, 00:07:20.862 "w_mbytes_per_sec": 0 00:07:20.862 }, 00:07:20.862 "claimed": false, 00:07:20.862 "zoned": false, 00:07:20.862 "supported_io_types": { 00:07:20.862 "read": true, 00:07:20.862 "write": true, 00:07:20.862 "unmap": true, 00:07:20.862 "flush": true, 00:07:20.862 "reset": true, 00:07:20.862 "nvme_admin": false, 00:07:20.862 "nvme_io": false, 00:07:20.862 "nvme_io_md": false, 00:07:20.862 "write_zeroes": true, 00:07:20.862 "zcopy": true, 00:07:20.862 "get_zone_info": false, 00:07:20.862 "zone_management": false, 00:07:20.862 "zone_append": false, 00:07:20.862 "compare": false, 00:07:20.862 "compare_and_write": false, 00:07:20.862 "abort": true, 00:07:20.862 "seek_hole": false, 00:07:20.862 "seek_data": false, 00:07:20.863 "copy": true, 00:07:20.863 "nvme_iov_md": false 00:07:20.863 }, 00:07:20.863 "memory_domains": [ 00:07:20.863 { 00:07:20.863 "dma_device_id": "system", 00:07:20.863 "dma_device_type": 1 00:07:20.863 }, 00:07:20.863 { 00:07:20.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.863 "dma_device_type": 2 00:07:20.863 } 
00:07:20.863 ], 00:07:20.863 "driver_specific": {} 00:07:20.863 } 00:07:20.863 ]' 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.863 [2024-11-20 07:04:02.989505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:20.863 [2024-11-20 07:04:02.989589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.863 [2024-11-20 07:04:02.989622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:20.863 [2024-11-20 07:04:02.989644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.863 [2024-11-20 07:04:02.992690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.863 [2024-11-20 07:04:02.992745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:20.863 Passthru0 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.863 07:04:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:20.863 { 00:07:20.863 "name": "Malloc2", 00:07:20.863 "aliases": [ 00:07:20.863 "87a5754b-bd64-4c7c-a393-e8713e45d808" 
00:07:20.863 ], 00:07:20.863 "product_name": "Malloc disk", 00:07:20.863 "block_size": 512, 00:07:20.863 "num_blocks": 16384, 00:07:20.863 "uuid": "87a5754b-bd64-4c7c-a393-e8713e45d808", 00:07:20.863 "assigned_rate_limits": { 00:07:20.863 "rw_ios_per_sec": 0, 00:07:20.863 "rw_mbytes_per_sec": 0, 00:07:20.863 "r_mbytes_per_sec": 0, 00:07:20.863 "w_mbytes_per_sec": 0 00:07:20.863 }, 00:07:20.863 "claimed": true, 00:07:20.863 "claim_type": "exclusive_write", 00:07:20.863 "zoned": false, 00:07:20.863 "supported_io_types": { 00:07:20.863 "read": true, 00:07:20.863 "write": true, 00:07:20.863 "unmap": true, 00:07:20.863 "flush": true, 00:07:20.863 "reset": true, 00:07:20.863 "nvme_admin": false, 00:07:20.863 "nvme_io": false, 00:07:20.863 "nvme_io_md": false, 00:07:20.863 "write_zeroes": true, 00:07:20.863 "zcopy": true, 00:07:20.863 "get_zone_info": false, 00:07:20.863 "zone_management": false, 00:07:20.863 "zone_append": false, 00:07:20.863 "compare": false, 00:07:20.863 "compare_and_write": false, 00:07:20.863 "abort": true, 00:07:20.863 "seek_hole": false, 00:07:20.863 "seek_data": false, 00:07:20.863 "copy": true, 00:07:20.863 "nvme_iov_md": false 00:07:20.863 }, 00:07:20.863 "memory_domains": [ 00:07:20.863 { 00:07:20.863 "dma_device_id": "system", 00:07:20.863 "dma_device_type": 1 00:07:20.863 }, 00:07:20.863 { 00:07:20.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.863 "dma_device_type": 2 00:07:20.863 } 00:07:20.863 ], 00:07:20.863 "driver_specific": {} 00:07:20.863 }, 00:07:20.863 { 00:07:20.863 "name": "Passthru0", 00:07:20.863 "aliases": [ 00:07:20.863 "0bae15c9-5e62-5cda-9317-ccbd90da778b" 00:07:20.863 ], 00:07:20.863 "product_name": "passthru", 00:07:20.863 "block_size": 512, 00:07:20.863 "num_blocks": 16384, 00:07:20.863 "uuid": "0bae15c9-5e62-5cda-9317-ccbd90da778b", 00:07:20.863 "assigned_rate_limits": { 00:07:20.863 "rw_ios_per_sec": 0, 00:07:20.863 "rw_mbytes_per_sec": 0, 00:07:20.863 "r_mbytes_per_sec": 0, 00:07:20.863 "w_mbytes_per_sec": 0 
00:07:20.863 }, 00:07:20.863 "claimed": false, 00:07:20.863 "zoned": false, 00:07:20.863 "supported_io_types": { 00:07:20.863 "read": true, 00:07:20.863 "write": true, 00:07:20.863 "unmap": true, 00:07:20.863 "flush": true, 00:07:20.863 "reset": true, 00:07:20.863 "nvme_admin": false, 00:07:20.863 "nvme_io": false, 00:07:20.863 "nvme_io_md": false, 00:07:20.863 "write_zeroes": true, 00:07:20.863 "zcopy": true, 00:07:20.863 "get_zone_info": false, 00:07:20.863 "zone_management": false, 00:07:20.863 "zone_append": false, 00:07:20.863 "compare": false, 00:07:20.863 "compare_and_write": false, 00:07:20.863 "abort": true, 00:07:20.863 "seek_hole": false, 00:07:20.863 "seek_data": false, 00:07:20.863 "copy": true, 00:07:20.863 "nvme_iov_md": false 00:07:20.863 }, 00:07:20.863 "memory_domains": [ 00:07:20.863 { 00:07:20.863 "dma_device_id": "system", 00:07:20.863 "dma_device_type": 1 00:07:20.863 }, 00:07:20.863 { 00:07:20.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.863 "dma_device_type": 2 00:07:20.863 } 00:07:20.863 ], 00:07:20.863 "driver_specific": { 00:07:20.863 "passthru": { 00:07:20.863 "name": "Passthru0", 00:07:20.863 "base_bdev_name": "Malloc2" 00:07:20.863 } 00:07:20.863 } 00:07:20.863 } 00:07:20.863 ]' 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:20.863 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:21.166 07:04:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:21.166 00:07:21.166 real 0m0.275s 00:07:21.166 user 0m0.144s 00:07:21.166 sys 0m0.032s 00:07:21.166 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.166 07:04:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.166 ************************************ 00:07:21.166 END TEST rpc_daemon_integrity 00:07:21.166 ************************************ 00:07:21.166 07:04:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:21.166 07:04:03 rpc -- rpc/rpc.sh@84 -- # killprocess 56985 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 56985 ']' 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@958 -- # kill -0 56985 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@959 -- # uname 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56985 00:07:21.166 killing process with pid 56985 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56985' 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@973 -- # kill 56985 00:07:21.166 07:04:03 rpc -- common/autotest_common.sh@978 -- # wait 56985 00:07:23.704 00:07:23.704 real 0m5.160s 00:07:23.704 user 0m5.610s 00:07:23.704 sys 0m0.733s 00:07:23.704 07:04:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.704 07:04:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.704 ************************************ 00:07:23.704 END TEST rpc 00:07:23.704 ************************************ 00:07:23.704 07:04:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:23.704 07:04:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.704 07:04:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.704 07:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.704 ************************************ 00:07:23.704 START TEST skip_rpc 00:07:23.704 ************************************ 00:07:23.704 07:04:05 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:23.704 * Looking for test storage... 
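The killprocess helper seen above probes the target with `kill -0 56985` before actually terminating it: signal 0 delivers nothing but still performs the existence and permission check. A small sketch of that liveness test in Python (the helper name is ours, not SPDK's):

```python
import os

def is_process_alive(pid: int) -> bool:
    """Mirror of the harness's `kill -0 $pid` probe: signal 0 sends
    nothing, but the kernel still reports whether the pid exists."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # exists, but owned by another user
    return True

# Our own pid is certainly alive while this runs.
assert is_process_alive(os.getpid())
```

Like the shell version, this distinguishes "process is gone" from "process exists but we may not signal it", which is why the harness follows a successful probe with the uname/ps checks before issuing the real kill.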
00:07:23.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:23.704 07:04:05 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.704 07:04:05 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.704 07:04:05 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.962 07:04:05 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.962 07:04:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.962 07:04:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.962 07:04:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.962 07:04:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.962 07:04:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.962 07:04:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.962 07:04:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.963 07:04:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.963 --rc genhtml_branch_coverage=1 00:07:23.963 --rc genhtml_function_coverage=1 00:07:23.963 --rc genhtml_legend=1 00:07:23.963 --rc geninfo_all_blocks=1 00:07:23.963 --rc geninfo_unexecuted_blocks=1 00:07:23.963 00:07:23.963 ' 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.963 --rc genhtml_branch_coverage=1 00:07:23.963 --rc genhtml_function_coverage=1 00:07:23.963 --rc genhtml_legend=1 00:07:23.963 --rc geninfo_all_blocks=1 00:07:23.963 --rc geninfo_unexecuted_blocks=1 00:07:23.963 00:07:23.963 ' 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:23.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.963 --rc genhtml_branch_coverage=1 00:07:23.963 --rc genhtml_function_coverage=1 00:07:23.963 --rc genhtml_legend=1 00:07:23.963 --rc geninfo_all_blocks=1 00:07:23.963 --rc geninfo_unexecuted_blocks=1 00:07:23.963 00:07:23.963 ' 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.963 --rc genhtml_branch_coverage=1 00:07:23.963 --rc genhtml_function_coverage=1 00:07:23.963 --rc genhtml_legend=1 00:07:23.963 --rc geninfo_all_blocks=1 00:07:23.963 --rc geninfo_unexecuted_blocks=1 00:07:23.963 00:07:23.963 ' 00:07:23.963 07:04:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:23.963 07:04:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:23.963 07:04:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.963 07:04:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.963 ************************************ 00:07:23.963 START TEST skip_rpc 00:07:23.963 ************************************ 00:07:23.963 07:04:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:23.963 07:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57215 00:07:23.963 07:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:23.963 07:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.963 07:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:23.963 [2024-11-20 07:04:06.101443] Starting SPDK v25.01-pre 
git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:23.963 [2024-11-20 07:04:06.101591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57215 ] 00:07:24.222 [2024-11-20 07:04:06.259275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.222 [2024-11-20 07:04:06.409013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57215 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57215 ']' 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57215 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57215 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.537 killing process with pid 57215 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57215' 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57215 00:07:29.537 07:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57215 00:07:32.072 00:07:32.072 real 0m7.852s 00:07:32.072 user 0m7.377s 00:07:32.072 sys 0m0.381s 00:07:32.072 07:04:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.072 07:04:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.072 ************************************ 00:07:32.072 END TEST skip_rpc 00:07:32.072 ************************************ 00:07:32.072 07:04:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:32.072 07:04:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.072 07:04:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.072 07:04:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.072 
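The skip_rpc test above starts spdk_tgt with `--no-rpc-server` and then asserts that `rpc_cmd spdk_get_version` fails (the `NOT ... es=1` dance). The underlying failure mode is simply that nothing listens on the RPC UNIX-domain socket. A sketch of that check in Python; the socket path here is a deliberately nonexistent placeholder, not SPDK's default `/var/tmp/spdk.sock`:

```python
import socket

def rpc_server_present(sock_path: str) -> bool:
    """Return True only if something is accepting connections on the
    given UNIX-domain socket, as an RPC-enabled spdk_tgt would be."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(sock_path)
    except (FileNotFoundError, ConnectionRefusedError):
        return False
    finally:
        s.close()
    return True

# With --no-rpc-server the socket is never created, so the probe fails,
# which is exactly the outcome the skip_rpc test treats as success.
assert not rpc_server_present("/var/tmp/spdk_no_such_target.sock")
```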
************************************ 00:07:32.072 START TEST skip_rpc_with_json 00:07:32.072 ************************************ 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57319 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57319 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57319 ']' 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.072 07:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:32.072 [2024-11-20 07:04:14.027417] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:32.072 [2024-11-20 07:04:14.027550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57319 ] 00:07:32.072 [2024-11-20 07:04:14.196422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.072 [2024-11-20 07:04:14.327375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:33.452 [2024-11-20 07:04:15.288081] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:33.452 request: 00:07:33.452 { 00:07:33.452 "trtype": "tcp", 00:07:33.452 "method": "nvmf_get_transports", 00:07:33.452 "req_id": 1 00:07:33.452 } 00:07:33.452 Got JSON-RPC error response 00:07:33.452 response: 00:07:33.452 { 00:07:33.452 "code": -19, 00:07:33.452 "message": "No such device" 00:07:33.452 } 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:33.452 [2024-11-20 07:04:15.300211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
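The exchange logged above shows nvmf_get_transports being asked for the "tcp" transport before any transport exists, and the target answering with error code -19 (ENODEV, "No such device"); only after nvmf_create_transport -t tcp does the TCP transport init message appear. A sketch of the request and error-response shapes, with field names copied verbatim from the log:

```python
import errno
import json

# Request as logged: ask for a transport that has not been created yet.
request = {
    "trtype": "tcp",
    "method": "nvmf_get_transports",
    "req_id": 1,
}

# Error response as logged: -19 is negative errno.ENODEV.
response = {
    "code": -19,
    "message": "No such device",
}

wire = json.dumps(request)
assert json.loads(wire)["method"] == "nvmf_get_transports"
assert response["code"] == -errno.ENODEV
```

This is why the test treats the error as the expected first result: the save_config dump that follows only contains an nvmf transport entry because the create call succeeded in between.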
00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.452 07:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:33.452 { 00:07:33.452 "subsystems": [ 00:07:33.452 { 00:07:33.452 "subsystem": "fsdev", 00:07:33.452 "config": [ 00:07:33.452 { 00:07:33.452 "method": "fsdev_set_opts", 00:07:33.452 "params": { 00:07:33.452 "fsdev_io_pool_size": 65535, 00:07:33.452 "fsdev_io_cache_size": 256 00:07:33.452 } 00:07:33.452 } 00:07:33.452 ] 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "subsystem": "keyring", 00:07:33.452 "config": [] 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "subsystem": "iobuf", 00:07:33.452 "config": [ 00:07:33.452 { 00:07:33.452 "method": "iobuf_set_options", 00:07:33.452 "params": { 00:07:33.452 "small_pool_count": 8192, 00:07:33.452 "large_pool_count": 1024, 00:07:33.452 "small_bufsize": 8192, 00:07:33.452 "large_bufsize": 135168, 00:07:33.452 "enable_numa": false 00:07:33.452 } 00:07:33.452 } 00:07:33.452 ] 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "subsystem": "sock", 00:07:33.452 "config": [ 00:07:33.452 { 00:07:33.452 "method": "sock_set_default_impl", 00:07:33.452 "params": { 00:07:33.452 "impl_name": "posix" 00:07:33.452 } 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "method": "sock_impl_set_options", 00:07:33.452 "params": { 00:07:33.452 "impl_name": "ssl", 00:07:33.452 "recv_buf_size": 4096, 00:07:33.452 "send_buf_size": 4096, 00:07:33.452 "enable_recv_pipe": true, 00:07:33.452 "enable_quickack": false, 00:07:33.452 
"enable_placement_id": 0, 00:07:33.452 "enable_zerocopy_send_server": true, 00:07:33.452 "enable_zerocopy_send_client": false, 00:07:33.452 "zerocopy_threshold": 0, 00:07:33.452 "tls_version": 0, 00:07:33.452 "enable_ktls": false 00:07:33.452 } 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "method": "sock_impl_set_options", 00:07:33.452 "params": { 00:07:33.452 "impl_name": "posix", 00:07:33.452 "recv_buf_size": 2097152, 00:07:33.452 "send_buf_size": 2097152, 00:07:33.452 "enable_recv_pipe": true, 00:07:33.452 "enable_quickack": false, 00:07:33.452 "enable_placement_id": 0, 00:07:33.452 "enable_zerocopy_send_server": true, 00:07:33.452 "enable_zerocopy_send_client": false, 00:07:33.452 "zerocopy_threshold": 0, 00:07:33.452 "tls_version": 0, 00:07:33.452 "enable_ktls": false 00:07:33.452 } 00:07:33.452 } 00:07:33.452 ] 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "subsystem": "vmd", 00:07:33.452 "config": [] 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "subsystem": "accel", 00:07:33.452 "config": [ 00:07:33.452 { 00:07:33.452 "method": "accel_set_options", 00:07:33.452 "params": { 00:07:33.452 "small_cache_size": 128, 00:07:33.452 "large_cache_size": 16, 00:07:33.452 "task_count": 2048, 00:07:33.452 "sequence_count": 2048, 00:07:33.452 "buf_count": 2048 00:07:33.452 } 00:07:33.452 } 00:07:33.452 ] 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "subsystem": "bdev", 00:07:33.452 "config": [ 00:07:33.452 { 00:07:33.452 "method": "bdev_set_options", 00:07:33.452 "params": { 00:07:33.452 "bdev_io_pool_size": 65535, 00:07:33.452 "bdev_io_cache_size": 256, 00:07:33.452 "bdev_auto_examine": true, 00:07:33.452 "iobuf_small_cache_size": 128, 00:07:33.452 "iobuf_large_cache_size": 16 00:07:33.452 } 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "method": "bdev_raid_set_options", 00:07:33.452 "params": { 00:07:33.452 "process_window_size_kb": 1024, 00:07:33.452 "process_max_bandwidth_mb_sec": 0 00:07:33.452 } 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "method": "bdev_iscsi_set_options", 
00:07:33.452 "params": { 00:07:33.452 "timeout_sec": 30 00:07:33.452 } 00:07:33.452 }, 00:07:33.452 { 00:07:33.452 "method": "bdev_nvme_set_options", 00:07:33.452 "params": { 00:07:33.452 "action_on_timeout": "none", 00:07:33.452 "timeout_us": 0, 00:07:33.452 "timeout_admin_us": 0, 00:07:33.452 "keep_alive_timeout_ms": 10000, 00:07:33.452 "arbitration_burst": 0, 00:07:33.452 "low_priority_weight": 0, 00:07:33.452 "medium_priority_weight": 0, 00:07:33.452 "high_priority_weight": 0, 00:07:33.452 "nvme_adminq_poll_period_us": 10000, 00:07:33.452 "nvme_ioq_poll_period_us": 0, 00:07:33.452 "io_queue_requests": 0, 00:07:33.452 "delay_cmd_submit": true, 00:07:33.452 "transport_retry_count": 4, 00:07:33.452 "bdev_retry_count": 3, 00:07:33.452 "transport_ack_timeout": 0, 00:07:33.452 "ctrlr_loss_timeout_sec": 0, 00:07:33.452 "reconnect_delay_sec": 0, 00:07:33.452 "fast_io_fail_timeout_sec": 0, 00:07:33.452 "disable_auto_failback": false, 00:07:33.452 "generate_uuids": false, 00:07:33.452 "transport_tos": 0, 00:07:33.452 "nvme_error_stat": false, 00:07:33.452 "rdma_srq_size": 0, 00:07:33.452 "io_path_stat": false, 00:07:33.452 "allow_accel_sequence": false, 00:07:33.452 "rdma_max_cq_size": 0, 00:07:33.452 "rdma_cm_event_timeout_ms": 0, 00:07:33.452 "dhchap_digests": [ 00:07:33.452 "sha256", 00:07:33.452 "sha384", 00:07:33.452 "sha512" 00:07:33.452 ], 00:07:33.452 "dhchap_dhgroups": [ 00:07:33.452 "null", 00:07:33.452 "ffdhe2048", 00:07:33.452 "ffdhe3072", 00:07:33.453 "ffdhe4096", 00:07:33.453 "ffdhe6144", 00:07:33.453 "ffdhe8192" 00:07:33.453 ] 00:07:33.453 } 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "method": "bdev_nvme_set_hotplug", 00:07:33.453 "params": { 00:07:33.453 "period_us": 100000, 00:07:33.453 "enable": false 00:07:33.453 } 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "method": "bdev_wait_for_examine" 00:07:33.453 } 00:07:33.453 ] 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "subsystem": "scsi", 00:07:33.453 "config": null 00:07:33.453 }, 00:07:33.453 { 
00:07:33.453 "subsystem": "scheduler", 00:07:33.453 "config": [ 00:07:33.453 { 00:07:33.453 "method": "framework_set_scheduler", 00:07:33.453 "params": { 00:07:33.453 "name": "static" 00:07:33.453 } 00:07:33.453 } 00:07:33.453 ] 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "subsystem": "vhost_scsi", 00:07:33.453 "config": [] 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "subsystem": "vhost_blk", 00:07:33.453 "config": [] 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "subsystem": "ublk", 00:07:33.453 "config": [] 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "subsystem": "nbd", 00:07:33.453 "config": [] 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "subsystem": "nvmf", 00:07:33.453 "config": [ 00:07:33.453 { 00:07:33.453 "method": "nvmf_set_config", 00:07:33.453 "params": { 00:07:33.453 "discovery_filter": "match_any", 00:07:33.453 "admin_cmd_passthru": { 00:07:33.453 "identify_ctrlr": false 00:07:33.453 }, 00:07:33.453 "dhchap_digests": [ 00:07:33.453 "sha256", 00:07:33.453 "sha384", 00:07:33.453 "sha512" 00:07:33.453 ], 00:07:33.453 "dhchap_dhgroups": [ 00:07:33.453 "null", 00:07:33.453 "ffdhe2048", 00:07:33.453 "ffdhe3072", 00:07:33.453 "ffdhe4096", 00:07:33.453 "ffdhe6144", 00:07:33.453 "ffdhe8192" 00:07:33.453 ] 00:07:33.453 } 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "method": "nvmf_set_max_subsystems", 00:07:33.453 "params": { 00:07:33.453 "max_subsystems": 1024 00:07:33.453 } 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "method": "nvmf_set_crdt", 00:07:33.453 "params": { 00:07:33.453 "crdt1": 0, 00:07:33.453 "crdt2": 0, 00:07:33.453 "crdt3": 0 00:07:33.453 } 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "method": "nvmf_create_transport", 00:07:33.453 "params": { 00:07:33.453 "trtype": "TCP", 00:07:33.453 "max_queue_depth": 128, 00:07:33.453 "max_io_qpairs_per_ctrlr": 127, 00:07:33.453 "in_capsule_data_size": 4096, 00:07:33.453 "max_io_size": 131072, 00:07:33.453 "io_unit_size": 131072, 00:07:33.453 "max_aq_depth": 128, 00:07:33.453 "num_shared_buffers": 511, 
00:07:33.453 "buf_cache_size": 4294967295, 00:07:33.453 "dif_insert_or_strip": false, 00:07:33.453 "zcopy": false, 00:07:33.453 "c2h_success": true, 00:07:33.453 "sock_priority": 0, 00:07:33.453 "abort_timeout_sec": 1, 00:07:33.453 "ack_timeout": 0, 00:07:33.453 "data_wr_pool_size": 0 00:07:33.453 } 00:07:33.453 } 00:07:33.453 ] 00:07:33.453 }, 00:07:33.453 { 00:07:33.453 "subsystem": "iscsi", 00:07:33.453 "config": [ 00:07:33.453 { 00:07:33.453 "method": "iscsi_set_options", 00:07:33.453 "params": { 00:07:33.453 "node_base": "iqn.2016-06.io.spdk", 00:07:33.453 "max_sessions": 128, 00:07:33.453 "max_connections_per_session": 2, 00:07:33.453 "max_queue_depth": 64, 00:07:33.453 "default_time2wait": 2, 00:07:33.453 "default_time2retain": 20, 00:07:33.453 "first_burst_length": 8192, 00:07:33.453 "immediate_data": true, 00:07:33.453 "allow_duplicated_isid": false, 00:07:33.453 "error_recovery_level": 0, 00:07:33.453 "nop_timeout": 60, 00:07:33.453 "nop_in_interval": 30, 00:07:33.453 "disable_chap": false, 00:07:33.453 "require_chap": false, 00:07:33.453 "mutual_chap": false, 00:07:33.453 "chap_group": 0, 00:07:33.453 "max_large_datain_per_connection": 64, 00:07:33.453 "max_r2t_per_connection": 4, 00:07:33.453 "pdu_pool_size": 36864, 00:07:33.453 "immediate_data_pool_size": 16384, 00:07:33.453 "data_out_pool_size": 2048 00:07:33.453 } 00:07:33.453 } 00:07:33.453 ] 00:07:33.453 } 00:07:33.453 ] 00:07:33.453 } 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57319 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57319 ']' 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57319 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- 
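The JSON dump above is the runtime configuration that spdk_tgt emits and can later consume again via `--json` (as the `skip_rpc.sh@46` invocation below does with `config.json`). As a quick, hypothetical way to sanity-check which subsystems such a saved config declares, plain grep is enough; the path and the tiny inline config here are illustrative, not taken from this run:

```shell
# Hypothetical sketch: list the "subsystem" names in a saved SPDK-style
# config file. The path and the inline JSON below are made up for
# illustration only.
config=/tmp/example_spdk_config.json
cat > "$config" <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] },
    { "subsystem": "nvmf", "config": [] }
  ]
}
EOF
# crude extraction without jq: pull the value of each "subsystem" key
grep -o '"subsystem": "[^"]*"' "$config" | cut -d'"' -f4
```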
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57319 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.453 killing process with pid 57319 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57319' 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57319 00:07:33.453 07:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57319 00:07:35.988 07:04:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57375 00:07:35.988 07:04:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:35.988 07:04:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57375 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57375 ']' 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57375 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57375 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:41.264 killing process with pid 57375 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57375' 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57375 00:07:41.264 07:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57375 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:43.800 00:07:43.800 real 0m11.807s 00:07:43.800 user 0m11.269s 00:07:43.800 sys 0m0.879s 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 ************************************ 00:07:43.800 END TEST skip_rpc_with_json 00:07:43.800 ************************************ 00:07:43.800 07:04:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:43.800 07:04:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.800 07:04:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.800 07:04:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 ************************************ 00:07:43.800 START TEST skip_rpc_with_delay 00:07:43.800 ************************************ 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:43.800 07:04:25 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:43.800 [2024-11-20 07:04:25.900799] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.800 00:07:43.800 real 0m0.189s 00:07:43.800 user 0m0.104s 00:07:43.800 sys 0m0.079s 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.800 07:04:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 ************************************ 00:07:43.800 END TEST skip_rpc_with_delay 00:07:43.800 ************************************ 00:07:43.800 07:04:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:43.800 07:04:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:43.800 07:04:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:43.800 07:04:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.800 07:04:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.800 07:04:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 ************************************ 00:07:43.800 START TEST exit_on_failed_rpc_init 00:07:43.800 ************************************ 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57514 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57514 00:07:43.800 07:04:26 
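The skip_rpc_with_delay trace above exercises autotest_common.sh's `NOT`/`valid_exec_arg` wrapper: spdk_tgt is *expected* to fail (`--wait-for-rpc` without an RPC server), the exit status is captured in `es`, and the test passes only when `(( !es == 0 ))` holds. A simplified sketch of that inversion pattern; this is not the real helper, which also validates that the argument is an executable and classifies `es`:

```shell
# Simplified sketch of the NOT pattern traced above: run a command that
# is *expected* to fail and invert its exit status (the real helper in
# autotest_common.sh also checks the executable with `type`/`[[ -x ]]`
# and post-processes es).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # succeed only if the wrapped command failed
}

NOT false && echo "expected failure observed"
```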
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57514 ']' 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.800 07:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:44.059 [2024-11-20 07:04:26.162773] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:44.059 [2024-11-20 07:04:26.162906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57514 ] 00:07:44.318 [2024-11-20 07:04:26.339094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.318 [2024-11-20 07:04:26.474376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:45.255 07:04:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:45.255 07:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:45.514 [2024-11-20 07:04:27.535986] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:45.514 [2024-11-20 07:04:27.536111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57532 ] 00:07:45.514 [2024-11-20 07:04:27.714619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.772 [2024-11-20 07:04:27.848411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.772 [2024-11-20 07:04:27.848506] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:45.772 [2024-11-20 07:04:27.848522] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:45.772 [2024-11-20 07:04:27.848536] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57514 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57514 ']' 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57514 00:07:46.031 07:04:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57514 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.031 killing process with pid 57514 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57514' 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57514 00:07:46.031 07:04:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57514 00:07:49.329 00:07:49.329 real 0m4.813s 00:07:49.329 user 0m5.201s 00:07:49.329 sys 0m0.590s 00:07:49.329 07:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.329 07:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:49.329 ************************************ 00:07:49.329 END TEST exit_on_failed_rpc_init 00:07:49.329 ************************************ 00:07:49.329 07:04:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:49.329 00:07:49.329 real 0m25.145s 00:07:49.329 user 0m24.146s 00:07:49.329 sys 0m2.229s 00:07:49.329 07:04:30 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.329 07:04:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.329 ************************************ 00:07:49.329 END TEST skip_rpc 00:07:49.329 ************************************ 00:07:49.329 07:04:30 -- spdk/autotest.sh@158 -- # run_test rpc_client 
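The repeated `killprocess` traces above (pids 57319, 57375, 57514) all follow the same shape: confirm the pid variable is set, probe the process with `kill -0`, inspect its name with `ps --no-headers -o comm=`, refuse to signal `sudo`, then kill and reap. A condensed, hypothetical version of that flow, with the uname and comm checks elided:

```shell
# Condensed sketch of the killprocess flow seen in the traces above
# (the real autotest_common.sh helper additionally checks `uname` and
# refuses to signal a process whose comm is "sudo").
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                # pid must be set
    kill -0 "$pid" 2>/dev/null || return 1   # process must exist
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; killed child's status ignored
    return 0
}

sleep 60 &
killprocess $!
```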
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:49.329 07:04:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.329 07:04:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.329 07:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:49.329 ************************************ 00:07:49.329 START TEST rpc_client 00:07:49.329 ************************************ 00:07:49.329 07:04:30 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:49.329 * Looking for test storage... 00:07:49.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:49.329 07:04:31 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.329 07:04:31 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.329 07:04:31 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.329 07:04:31 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.330 07:04:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:49.330 07:04:31 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.330 07:04:31 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc 
genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:49.330 OK 00:07:49.330 07:04:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:49.330 00:07:49.330 real 0m0.295s 00:07:49.330 user 0m0.164s 00:07:49.330 sys 0m0.145s 00:07:49.330 07:04:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.330 07:04:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:49.330 ************************************ 00:07:49.330 END TEST rpc_client 00:07:49.330 ************************************ 00:07:49.330 07:04:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:49.330 07:04:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.330 07:04:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.330 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:49.330 ************************************ 00:07:49.330 START TEST json_config 
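The rpc_client trace above steps through scripts/common.sh's `cmp_versions`: it splits "1.15" and "2" on `IFS=.-` into arrays, then compares them field by field numerically to decide whether the installed lcov predates version 2 (and hence which LCOV_OPTS to export). A stripped-down sketch of the same field-wise comparison, covering only the less-than case:

```shell
# Stripped-down sketch of the field-wise version comparison traced
# above from scripts/common.sh (the real cmp_versions also splits on
# '-' and supports '>', '<', '=', '>=', '<=' operators).
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)   # split dotted versions into fields
    local i x y
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        x=${v1[i]:-0}; y=${v2[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```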
00:07:49.330 ************************************ 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.330 07:04:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.330 07:04:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.330 07:04:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.330 07:04:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.330 07:04:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.330 07:04:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.330 07:04:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.330 07:04:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:49.330 07:04:31 json_config -- scripts/common.sh@345 -- # : 1 00:07:49.330 07:04:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.330 07:04:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.330 07:04:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:49.330 07:04:31 json_config -- scripts/common.sh@353 -- # local d=1 00:07:49.330 07:04:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.330 07:04:31 json_config -- scripts/common.sh@355 -- # echo 1 00:07:49.330 07:04:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.330 07:04:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@353 -- # local d=2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.330 07:04:31 json_config -- scripts/common.sh@355 -- # echo 2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.330 07:04:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.330 07:04:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.330 07:04:31 json_config -- scripts/common.sh@368 -- # return 0 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.330 --rc genhtml_branch_coverage=1 00:07:49.330 --rc genhtml_function_coverage=1 00:07:49.330 --rc genhtml_legend=1 00:07:49.330 --rc geninfo_all_blocks=1 00:07:49.330 --rc geninfo_unexecuted_blocks=1 00:07:49.330 00:07:49.330 ' 00:07:49.330 07:04:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d81c61e6-bb83-4cf1-ac1d-576de88b2ab1 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=d81c61e6-bb83-4cf1-ac1d-576de88b2ab1 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.330 07:04:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.331 07:04:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.331 07:04:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.331 07:04:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.331 07:04:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.331 07:04:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.331 07:04:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.331 07:04:31 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.331 07:04:31 json_config -- paths/export.sh@5 -- # export PATH 00:07:49.331 07:04:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@51 -- # : 0 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.331 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.331 07:04:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.331 07:04:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
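The paths/export.sh trace above prepends the same /opt/golangci, /opt/protoc, and /opt/go directories on every source, so by this point each of them appears in PATH four times. A minimal idempotent prepend avoids that accumulation; `path_prepend` is a hypothetical helper sketched here for illustration, not part of SPDK:

```shell
# path_prepend: add a directory to the front of PATH only if it is not
# already present anywhere in PATH (hypothetical helper, not SPDK's).
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;              # already in PATH: leave it unchanged
    *) PATH="$1:$PATH" ;;     # not present: prepend it once
  esac
}

PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"                      # /opt/go/1.21.1/bin:/usr/bin:/bin
```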
00:07:49.331 07:04:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:49.331 07:04:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:49.331 07:04:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:49.331 07:04:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:49.331 WARNING: No tests are enabled so not running JSON configuration tests 00:07:49.331 07:04:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:49.331 07:04:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:49.331 00:07:49.331 real 0m0.238s 00:07:49.331 user 0m0.153s 00:07:49.331 sys 0m0.091s 00:07:49.331 07:04:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.331 07:04:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:49.331 ************************************ 00:07:49.331 END TEST json_config 00:07:49.331 ************************************ 00:07:49.591 07:04:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:49.591 07:04:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.591 07:04:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.591 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:49.591 ************************************ 00:07:49.591 START TEST json_config_extra_key 00:07:49.591 ************************************ 00:07:49.591 07:04:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:49.591 07:04:31 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.591 07:04:31 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:07:49.591 07:04:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.591 07:04:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.591 07:04:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:49.592 07:04:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.592 07:04:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.592 --rc genhtml_branch_coverage=1 00:07:49.592 --rc genhtml_function_coverage=1 00:07:49.592 --rc genhtml_legend=1 00:07:49.592 --rc geninfo_all_blocks=1 00:07:49.592 --rc geninfo_unexecuted_blocks=1 00:07:49.592 00:07:49.592 ' 00:07:49.592 07:04:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.592 --rc genhtml_branch_coverage=1 00:07:49.592 --rc genhtml_function_coverage=1 00:07:49.592 --rc 
genhtml_legend=1 00:07:49.592 --rc geninfo_all_blocks=1 00:07:49.592 --rc geninfo_unexecuted_blocks=1 00:07:49.592 00:07:49.592 ' 00:07:49.592 07:04:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.592 --rc genhtml_branch_coverage=1 00:07:49.592 --rc genhtml_function_coverage=1 00:07:49.592 --rc genhtml_legend=1 00:07:49.592 --rc geninfo_all_blocks=1 00:07:49.592 --rc geninfo_unexecuted_blocks=1 00:07:49.592 00:07:49.592 ' 00:07:49.592 07:04:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.592 --rc genhtml_branch_coverage=1 00:07:49.592 --rc genhtml_function_coverage=1 00:07:49.592 --rc genhtml_legend=1 00:07:49.592 --rc geninfo_all_blocks=1 00:07:49.592 --rc geninfo_unexecuted_blocks=1 00:07:49.592 00:07:49.592 ' 00:07:49.592 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d81c61e6-bb83-4cf1-ac1d-576de88b2ab1 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d81c61e6-bb83-4cf1-ac1d-576de88b2ab1 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.592 07:04:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.592 07:04:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.592 07:04:31 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.592 07:04:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.592 07:04:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:49.592 07:04:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.592 07:04:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.851 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.851 07:04:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:49.851 INFO: launching applications... 00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
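The `[: : integer expression expected` message in the trace above comes from `'[' '' -eq 1 ']'`: an unset variable expands to the empty string, which is not a valid operand for a numeric test. A defaulted expansion keeps the test well-formed; the variable name below is illustrative:

```shell
# With an unset variable, a bare numeric test reproduces the log's error:
#   [ "$MAYBE_FLAG" -eq 1 ]   # bash: [: : integer expression expected
# Defaulting the expansion to 0 makes the operand always an integer.
unset MAYBE_FLAG
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"            # prints "flag unset"
fi
```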
00:07:49.851 07:04:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57748 00:07:49.851 Waiting for target to run... 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:49.851 07:04:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57748 /var/tmp/spdk_tgt.sock 00:07:49.851 07:04:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57748 ']' 00:07:49.851 07:04:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:49.851 07:04:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
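After launching spdk_tgt with `--json`, the trace blocks in `waitforlisten` until the UNIX domain socket at /var/tmp/spdk_tgt.sock appears. A simplified sketch of that wait idiom follows; the names are illustrative, and the real SPDK helper does more (it also probes the RPC server) than this minimal version:

```shell
# waitforlisten (simplified sketch): poll until either the target's
# listen socket exists (success) or the process has died (failure).
waitforlisten() {
  local pid=$1 sock=$2 i
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # process is gone: give up
    [ -S "$sock" ] && return 0               # socket is up: ready
    sleep 0.1
  done
  return 1                                   # timed out
}
```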
00:07:49.851 07:04:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:49.851 07:04:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.851 07:04:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:49.851 [2024-11-20 07:04:31.975701] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:49.851 [2024-11-20 07:04:31.975848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57748 ] 00:07:50.418 [2024-11-20 07:04:32.380403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.418 [2024-11-20 07:04:32.507508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.351 07:04:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.351 07:04:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:51.351 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:51.351 INFO: shutting down applications... 00:07:51.351 07:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:51.351 07:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57748 ]] 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57748 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:51.351 07:04:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:51.918 07:04:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:51.918 07:04:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:51.918 07:04:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:51.918 07:04:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:52.200 07:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:52.200 07:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:52.200 07:04:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:52.200 07:04:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:52.766 07:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:52.766 07:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:52.766 07:04:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:52.766 07:04:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:53.332 07:04:35 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:07:53.332 07:04:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:53.332 07:04:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:53.332 07:04:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:53.901 07:04:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:53.901 07:04:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:53.901 07:04:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:53.901 07:04:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:54.160 07:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:54.160 07:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:54.160 07:04:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:54.160 07:04:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:54.730 SPDK target shutdown done 00:07:54.730 07:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:54.730 07:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:54.730 07:04:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:07:54.730 07:04:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:54.730 07:04:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:54.730 07:04:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:54.730 07:04:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:54.730 Success 00:07:54.730 07:04:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:54.730 ************************************ 00:07:54.730 END TEST json_config_extra_key 00:07:54.730 ************************************ 00:07:54.730 00:07:54.730 real 0m5.278s 00:07:54.730 user 
0m4.935s 00:07:54.730 sys 0m0.577s 00:07:54.730 07:04:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.730 07:04:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:54.730 07:04:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:54.730 07:04:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.730 07:04:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.730 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:54.730 ************************************ 00:07:54.730 START TEST alias_rpc 00:07:54.730 ************************************ 00:07:54.730 07:04:36 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:54.990 * Looking for test storage... 00:07:54.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:54.990 07:04:37 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.990 07:04:37 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.990 07:04:37 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.990 07:04:37 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@340 
-- # ver1_l=2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.990 07:04:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:54.990 07:04:37 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.990 07:04:37 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.990 --rc genhtml_branch_coverage=1 00:07:54.990 --rc genhtml_function_coverage=1 00:07:54.990 --rc genhtml_legend=1 00:07:54.990 --rc geninfo_all_blocks=1 00:07:54.990 --rc geninfo_unexecuted_blocks=1 00:07:54.990 
00:07:54.990 ' 00:07:54.990 07:04:37 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.990 --rc genhtml_branch_coverage=1 00:07:54.990 --rc genhtml_function_coverage=1 00:07:54.990 --rc genhtml_legend=1 00:07:54.990 --rc geninfo_all_blocks=1 00:07:54.990 --rc geninfo_unexecuted_blocks=1 00:07:54.991 00:07:54.991 ' 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.991 --rc genhtml_branch_coverage=1 00:07:54.991 --rc genhtml_function_coverage=1 00:07:54.991 --rc genhtml_legend=1 00:07:54.991 --rc geninfo_all_blocks=1 00:07:54.991 --rc geninfo_unexecuted_blocks=1 00:07:54.991 00:07:54.991 ' 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.991 --rc genhtml_branch_coverage=1 00:07:54.991 --rc genhtml_function_coverage=1 00:07:54.991 --rc genhtml_legend=1 00:07:54.991 --rc geninfo_all_blocks=1 00:07:54.991 --rc geninfo_unexecuted_blocks=1 00:07:54.991 00:07:54.991 ' 00:07:54.991 07:04:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.991 07:04:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57866 00:07:54.991 07:04:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:54.991 07:04:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57866 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57866 ']' 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.991 07:04:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.250 [2024-11-20 07:04:37.284962] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:55.250 [2024-11-20 07:04:37.285113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57866 ] 00:07:55.250 [2024-11-20 07:04:37.466820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.508 [2024-11-20 07:04:37.629255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.882 07:04:38 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.882 07:04:38 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:56.882 07:04:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:56.882 07:04:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57866 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57866 ']' 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57866 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57866 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.882 07:04:39 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57866' 00:07:56.882 killing process with pid 57866 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 57866 00:07:56.882 07:04:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 57866 00:08:00.166 ************************************ 00:08:00.166 END TEST alias_rpc 00:08:00.166 ************************************ 00:08:00.166 00:08:00.166 real 0m5.062s 00:08:00.166 user 0m4.940s 00:08:00.166 sys 0m0.774s 00:08:00.166 07:04:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.166 07:04:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.166 07:04:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:00.166 07:04:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:00.166 07:04:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.166 07:04:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.166 07:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:00.166 ************************************ 00:08:00.166 START TEST spdkcli_tcp 00:08:00.166 ************************************ 00:08:00.166 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:00.166 * Looking for test storage... 
00:08:00.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:00.166 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.166 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.166 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.166 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:00.166 07:04:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.167 07:04:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.167 --rc genhtml_branch_coverage=1 00:08:00.167 --rc genhtml_function_coverage=1 00:08:00.167 --rc genhtml_legend=1 00:08:00.167 --rc geninfo_all_blocks=1 00:08:00.167 --rc geninfo_unexecuted_blocks=1 00:08:00.167 00:08:00.167 ' 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.167 --rc genhtml_branch_coverage=1 00:08:00.167 --rc genhtml_function_coverage=1 00:08:00.167 --rc genhtml_legend=1 00:08:00.167 --rc geninfo_all_blocks=1 00:08:00.167 --rc geninfo_unexecuted_blocks=1 00:08:00.167 00:08:00.167 ' 00:08:00.167 07:04:42 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.167 --rc genhtml_branch_coverage=1 00:08:00.167 --rc genhtml_function_coverage=1 00:08:00.167 --rc genhtml_legend=1 00:08:00.167 --rc geninfo_all_blocks=1 00:08:00.167 --rc geninfo_unexecuted_blocks=1 00:08:00.167 00:08:00.167 ' 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.167 --rc genhtml_branch_coverage=1 00:08:00.167 --rc genhtml_function_coverage=1 00:08:00.167 --rc genhtml_legend=1 00:08:00.167 --rc geninfo_all_blocks=1 00:08:00.167 --rc geninfo_unexecuted_blocks=1 00:08:00.167 00:08:00.167 ' 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57984 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:00.167 07:04:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57984 00:08:00.167 07:04:42 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57984 ']' 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.167 07:04:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.167 [2024-11-20 07:04:42.427867] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:00.167 [2024-11-20 07:04:42.428828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57984 ] 00:08:00.425 [2024-11-20 07:04:42.632805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.684 [2024-11-20 07:04:42.793623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.684 [2024-11-20 07:04:42.793667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.057 07:04:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.057 07:04:43 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:02.057 07:04:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58012 00:08:02.057 07:04:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:02.057 07:04:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:02.057 [ 00:08:02.057 "bdev_malloc_delete", 
00:08:02.057 "bdev_malloc_create", 00:08:02.057 "bdev_null_resize", 00:08:02.057 "bdev_null_delete", 00:08:02.057 "bdev_null_create", 00:08:02.057 "bdev_nvme_cuse_unregister", 00:08:02.057 "bdev_nvme_cuse_register", 00:08:02.057 "bdev_opal_new_user", 00:08:02.057 "bdev_opal_set_lock_state", 00:08:02.057 "bdev_opal_delete", 00:08:02.057 "bdev_opal_get_info", 00:08:02.057 "bdev_opal_create", 00:08:02.057 "bdev_nvme_opal_revert", 00:08:02.057 "bdev_nvme_opal_init", 00:08:02.057 "bdev_nvme_send_cmd", 00:08:02.057 "bdev_nvme_set_keys", 00:08:02.057 "bdev_nvme_get_path_iostat", 00:08:02.057 "bdev_nvme_get_mdns_discovery_info", 00:08:02.057 "bdev_nvme_stop_mdns_discovery", 00:08:02.057 "bdev_nvme_start_mdns_discovery", 00:08:02.057 "bdev_nvme_set_multipath_policy", 00:08:02.057 "bdev_nvme_set_preferred_path", 00:08:02.057 "bdev_nvme_get_io_paths", 00:08:02.057 "bdev_nvme_remove_error_injection", 00:08:02.057 "bdev_nvme_add_error_injection", 00:08:02.057 "bdev_nvme_get_discovery_info", 00:08:02.057 "bdev_nvme_stop_discovery", 00:08:02.057 "bdev_nvme_start_discovery", 00:08:02.057 "bdev_nvme_get_controller_health_info", 00:08:02.057 "bdev_nvme_disable_controller", 00:08:02.057 "bdev_nvme_enable_controller", 00:08:02.057 "bdev_nvme_reset_controller", 00:08:02.057 "bdev_nvme_get_transport_statistics", 00:08:02.057 "bdev_nvme_apply_firmware", 00:08:02.057 "bdev_nvme_detach_controller", 00:08:02.057 "bdev_nvme_get_controllers", 00:08:02.057 "bdev_nvme_attach_controller", 00:08:02.057 "bdev_nvme_set_hotplug", 00:08:02.057 "bdev_nvme_set_options", 00:08:02.057 "bdev_passthru_delete", 00:08:02.057 "bdev_passthru_create", 00:08:02.057 "bdev_lvol_set_parent_bdev", 00:08:02.057 "bdev_lvol_set_parent", 00:08:02.057 "bdev_lvol_check_shallow_copy", 00:08:02.057 "bdev_lvol_start_shallow_copy", 00:08:02.057 "bdev_lvol_grow_lvstore", 00:08:02.057 "bdev_lvol_get_lvols", 00:08:02.057 "bdev_lvol_get_lvstores", 00:08:02.057 "bdev_lvol_delete", 00:08:02.057 "bdev_lvol_set_read_only", 
00:08:02.057 "bdev_lvol_resize", 00:08:02.057 "bdev_lvol_decouple_parent", 00:08:02.057 "bdev_lvol_inflate", 00:08:02.057 "bdev_lvol_rename", 00:08:02.057 "bdev_lvol_clone_bdev", 00:08:02.057 "bdev_lvol_clone", 00:08:02.057 "bdev_lvol_snapshot", 00:08:02.057 "bdev_lvol_create", 00:08:02.057 "bdev_lvol_delete_lvstore", 00:08:02.057 "bdev_lvol_rename_lvstore", 00:08:02.057 "bdev_lvol_create_lvstore", 00:08:02.057 "bdev_raid_set_options", 00:08:02.057 "bdev_raid_remove_base_bdev", 00:08:02.057 "bdev_raid_add_base_bdev", 00:08:02.057 "bdev_raid_delete", 00:08:02.057 "bdev_raid_create", 00:08:02.057 "bdev_raid_get_bdevs", 00:08:02.057 "bdev_error_inject_error", 00:08:02.057 "bdev_error_delete", 00:08:02.057 "bdev_error_create", 00:08:02.057 "bdev_split_delete", 00:08:02.057 "bdev_split_create", 00:08:02.057 "bdev_delay_delete", 00:08:02.057 "bdev_delay_create", 00:08:02.057 "bdev_delay_update_latency", 00:08:02.057 "bdev_zone_block_delete", 00:08:02.057 "bdev_zone_block_create", 00:08:02.057 "blobfs_create", 00:08:02.057 "blobfs_detect", 00:08:02.057 "blobfs_set_cache_size", 00:08:02.057 "bdev_aio_delete", 00:08:02.057 "bdev_aio_rescan", 00:08:02.057 "bdev_aio_create", 00:08:02.057 "bdev_ftl_set_property", 00:08:02.057 "bdev_ftl_get_properties", 00:08:02.057 "bdev_ftl_get_stats", 00:08:02.057 "bdev_ftl_unmap", 00:08:02.057 "bdev_ftl_unload", 00:08:02.057 "bdev_ftl_delete", 00:08:02.057 "bdev_ftl_load", 00:08:02.057 "bdev_ftl_create", 00:08:02.057 "bdev_virtio_attach_controller", 00:08:02.057 "bdev_virtio_scsi_get_devices", 00:08:02.057 "bdev_virtio_detach_controller", 00:08:02.057 "bdev_virtio_blk_set_hotplug", 00:08:02.057 "bdev_iscsi_delete", 00:08:02.057 "bdev_iscsi_create", 00:08:02.057 "bdev_iscsi_set_options", 00:08:02.057 "accel_error_inject_error", 00:08:02.057 "ioat_scan_accel_module", 00:08:02.057 "dsa_scan_accel_module", 00:08:02.057 "iaa_scan_accel_module", 00:08:02.057 "keyring_file_remove_key", 00:08:02.057 "keyring_file_add_key", 00:08:02.057 
"keyring_linux_set_options", 00:08:02.057 "fsdev_aio_delete", 00:08:02.057 "fsdev_aio_create", 00:08:02.057 "iscsi_get_histogram", 00:08:02.057 "iscsi_enable_histogram", 00:08:02.057 "iscsi_set_options", 00:08:02.057 "iscsi_get_auth_groups", 00:08:02.057 "iscsi_auth_group_remove_secret", 00:08:02.057 "iscsi_auth_group_add_secret", 00:08:02.057 "iscsi_delete_auth_group", 00:08:02.057 "iscsi_create_auth_group", 00:08:02.057 "iscsi_set_discovery_auth", 00:08:02.057 "iscsi_get_options", 00:08:02.057 "iscsi_target_node_request_logout", 00:08:02.057 "iscsi_target_node_set_redirect", 00:08:02.057 "iscsi_target_node_set_auth", 00:08:02.057 "iscsi_target_node_add_lun", 00:08:02.057 "iscsi_get_stats", 00:08:02.057 "iscsi_get_connections", 00:08:02.057 "iscsi_portal_group_set_auth", 00:08:02.057 "iscsi_start_portal_group", 00:08:02.057 "iscsi_delete_portal_group", 00:08:02.057 "iscsi_create_portal_group", 00:08:02.057 "iscsi_get_portal_groups", 00:08:02.057 "iscsi_delete_target_node", 00:08:02.057 "iscsi_target_node_remove_pg_ig_maps", 00:08:02.057 "iscsi_target_node_add_pg_ig_maps", 00:08:02.057 "iscsi_create_target_node", 00:08:02.057 "iscsi_get_target_nodes", 00:08:02.057 "iscsi_delete_initiator_group", 00:08:02.057 "iscsi_initiator_group_remove_initiators", 00:08:02.057 "iscsi_initiator_group_add_initiators", 00:08:02.057 "iscsi_create_initiator_group", 00:08:02.057 "iscsi_get_initiator_groups", 00:08:02.057 "nvmf_set_crdt", 00:08:02.057 "nvmf_set_config", 00:08:02.057 "nvmf_set_max_subsystems", 00:08:02.057 "nvmf_stop_mdns_prr", 00:08:02.057 "nvmf_publish_mdns_prr", 00:08:02.057 "nvmf_subsystem_get_listeners", 00:08:02.057 "nvmf_subsystem_get_qpairs", 00:08:02.057 "nvmf_subsystem_get_controllers", 00:08:02.057 "nvmf_get_stats", 00:08:02.057 "nvmf_get_transports", 00:08:02.057 "nvmf_create_transport", 00:08:02.057 "nvmf_get_targets", 00:08:02.057 "nvmf_delete_target", 00:08:02.057 "nvmf_create_target", 00:08:02.057 "nvmf_subsystem_allow_any_host", 00:08:02.057 
"nvmf_subsystem_set_keys", 00:08:02.057 "nvmf_subsystem_remove_host", 00:08:02.057 "nvmf_subsystem_add_host", 00:08:02.057 "nvmf_ns_remove_host", 00:08:02.057 "nvmf_ns_add_host", 00:08:02.057 "nvmf_subsystem_remove_ns", 00:08:02.057 "nvmf_subsystem_set_ns_ana_group", 00:08:02.057 "nvmf_subsystem_add_ns", 00:08:02.057 "nvmf_subsystem_listener_set_ana_state", 00:08:02.057 "nvmf_discovery_get_referrals", 00:08:02.057 "nvmf_discovery_remove_referral", 00:08:02.057 "nvmf_discovery_add_referral", 00:08:02.057 "nvmf_subsystem_remove_listener", 00:08:02.057 "nvmf_subsystem_add_listener", 00:08:02.057 "nvmf_delete_subsystem", 00:08:02.057 "nvmf_create_subsystem", 00:08:02.057 "nvmf_get_subsystems", 00:08:02.057 "env_dpdk_get_mem_stats", 00:08:02.057 "nbd_get_disks", 00:08:02.058 "nbd_stop_disk", 00:08:02.058 "nbd_start_disk", 00:08:02.058 "ublk_recover_disk", 00:08:02.058 "ublk_get_disks", 00:08:02.058 "ublk_stop_disk", 00:08:02.058 "ublk_start_disk", 00:08:02.058 "ublk_destroy_target", 00:08:02.058 "ublk_create_target", 00:08:02.058 "virtio_blk_create_transport", 00:08:02.058 "virtio_blk_get_transports", 00:08:02.058 "vhost_controller_set_coalescing", 00:08:02.058 "vhost_get_controllers", 00:08:02.058 "vhost_delete_controller", 00:08:02.058 "vhost_create_blk_controller", 00:08:02.058 "vhost_scsi_controller_remove_target", 00:08:02.058 "vhost_scsi_controller_add_target", 00:08:02.058 "vhost_start_scsi_controller", 00:08:02.058 "vhost_create_scsi_controller", 00:08:02.058 "thread_set_cpumask", 00:08:02.058 "scheduler_set_options", 00:08:02.058 "framework_get_governor", 00:08:02.058 "framework_get_scheduler", 00:08:02.058 "framework_set_scheduler", 00:08:02.058 "framework_get_reactors", 00:08:02.058 "thread_get_io_channels", 00:08:02.058 "thread_get_pollers", 00:08:02.058 "thread_get_stats", 00:08:02.058 "framework_monitor_context_switch", 00:08:02.058 "spdk_kill_instance", 00:08:02.058 "log_enable_timestamps", 00:08:02.058 "log_get_flags", 00:08:02.058 "log_clear_flag", 
00:08:02.058 "log_set_flag", 00:08:02.058 "log_get_level", 00:08:02.058 "log_set_level", 00:08:02.058 "log_get_print_level", 00:08:02.058 "log_set_print_level", 00:08:02.058 "framework_enable_cpumask_locks", 00:08:02.058 "framework_disable_cpumask_locks", 00:08:02.058 "framework_wait_init", 00:08:02.058 "framework_start_init", 00:08:02.058 "scsi_get_devices", 00:08:02.058 "bdev_get_histogram", 00:08:02.058 "bdev_enable_histogram", 00:08:02.058 "bdev_set_qos_limit", 00:08:02.058 "bdev_set_qd_sampling_period", 00:08:02.058 "bdev_get_bdevs", 00:08:02.058 "bdev_reset_iostat", 00:08:02.058 "bdev_get_iostat", 00:08:02.058 "bdev_examine", 00:08:02.058 "bdev_wait_for_examine", 00:08:02.058 "bdev_set_options", 00:08:02.058 "accel_get_stats", 00:08:02.058 "accel_set_options", 00:08:02.058 "accel_set_driver", 00:08:02.058 "accel_crypto_key_destroy", 00:08:02.058 "accel_crypto_keys_get", 00:08:02.058 "accel_crypto_key_create", 00:08:02.058 "accel_assign_opc", 00:08:02.058 "accel_get_module_info", 00:08:02.058 "accel_get_opc_assignments", 00:08:02.058 "vmd_rescan", 00:08:02.058 "vmd_remove_device", 00:08:02.058 "vmd_enable", 00:08:02.058 "sock_get_default_impl", 00:08:02.058 "sock_set_default_impl", 00:08:02.058 "sock_impl_set_options", 00:08:02.058 "sock_impl_get_options", 00:08:02.058 "iobuf_get_stats", 00:08:02.058 "iobuf_set_options", 00:08:02.058 "keyring_get_keys", 00:08:02.058 "framework_get_pci_devices", 00:08:02.058 "framework_get_config", 00:08:02.058 "framework_get_subsystems", 00:08:02.058 "fsdev_set_opts", 00:08:02.058 "fsdev_get_opts", 00:08:02.058 "trace_get_info", 00:08:02.058 "trace_get_tpoint_group_mask", 00:08:02.058 "trace_disable_tpoint_group", 00:08:02.058 "trace_enable_tpoint_group", 00:08:02.058 "trace_clear_tpoint_mask", 00:08:02.058 "trace_set_tpoint_mask", 00:08:02.058 "notify_get_notifications", 00:08:02.058 "notify_get_types", 00:08:02.058 "spdk_get_version", 00:08:02.058 "rpc_get_methods" 00:08:02.058 ] 00:08:02.315 07:04:44 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:02.315 07:04:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:02.315 07:04:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57984 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57984 ']' 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57984 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57984 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57984' 00:08:02.315 killing process with pid 57984 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57984 00:08:02.315 07:04:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57984 00:08:05.598 00:08:05.598 real 0m5.424s 00:08:05.598 user 0m9.775s 00:08:05.598 sys 0m0.896s 00:08:05.598 ************************************ 00:08:05.598 END TEST spdkcli_tcp 00:08:05.598 ************************************ 00:08:05.598 07:04:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.598 07:04:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.598 07:04:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.598 07:04:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.598 07:04:47 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.598 07:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.598 ************************************ 00:08:05.598 START TEST dpdk_mem_utility 00:08:05.598 ************************************ 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.598 * Looking for test storage... 00:08:05.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:05.598 
07:04:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:05.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.598 07:04:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.598 --rc genhtml_branch_coverage=1 00:08:05.598 --rc genhtml_function_coverage=1 00:08:05.598 --rc genhtml_legend=1 00:08:05.598 --rc geninfo_all_blocks=1 00:08:05.598 --rc geninfo_unexecuted_blocks=1 00:08:05.598 00:08:05.598 ' 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.598 --rc genhtml_branch_coverage=1 00:08:05.598 --rc genhtml_function_coverage=1 00:08:05.598 --rc genhtml_legend=1 00:08:05.598 --rc geninfo_all_blocks=1 00:08:05.598 --rc geninfo_unexecuted_blocks=1 00:08:05.598 00:08:05.598 ' 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.598 --rc genhtml_branch_coverage=1 00:08:05.598 --rc genhtml_function_coverage=1 00:08:05.598 --rc genhtml_legend=1 00:08:05.598 --rc geninfo_all_blocks=1 00:08:05.598 --rc geninfo_unexecuted_blocks=1 00:08:05.598 00:08:05.598 ' 00:08:05.598 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.598 --rc genhtml_branch_coverage=1 00:08:05.599 --rc genhtml_function_coverage=1 00:08:05.599 --rc genhtml_legend=1 
00:08:05.599 --rc geninfo_all_blocks=1 00:08:05.599 --rc geninfo_unexecuted_blocks=1 00:08:05.599 00:08:05.599 ' 00:08:05.599 07:04:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:05.599 07:04:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58121 00:08:05.599 07:04:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58121 00:08:05.599 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58121 ']' 00:08:05.599 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.599 07:04:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.599 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.599 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.599 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.599 07:04:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:05.599 [2024-11-20 07:04:47.857811] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:05.599 [2024-11-20 07:04:47.858069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58121 ] 00:08:05.857 [2024-11-20 07:04:48.040650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.116 [2024-11-20 07:04:48.213269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.501 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.501 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:07.501 07:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:07.501 07:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:07.501 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.501 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:07.501 { 00:08:07.501 "filename": "/tmp/spdk_mem_dump.txt" 00:08:07.501 } 00:08:07.501 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.501 07:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:07.501 DPDK memory size 816.000000 MiB in 1 heap(s) 00:08:07.501 1 heaps totaling size 816.000000 MiB 00:08:07.501 size: 816.000000 MiB heap id: 0 00:08:07.501 end heaps---------- 00:08:07.501 9 mempools totaling size 595.772034 MiB 00:08:07.501 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:07.501 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:07.501 size: 92.545471 MiB name: bdev_io_58121 00:08:07.501 size: 50.003479 MiB name: msgpool_58121 00:08:07.501 size: 36.509338 MiB name: fsdev_io_58121 00:08:07.501 size: 
21.763794 MiB name: PDU_Pool 00:08:07.501 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:07.501 size: 4.133484 MiB name: evtpool_58121 00:08:07.501 size: 0.026123 MiB name: Session_Pool 00:08:07.501 end mempools------- 00:08:07.501 6 memzones totaling size 4.142822 MiB 00:08:07.501 size: 1.000366 MiB name: RG_ring_0_58121 00:08:07.501 size: 1.000366 MiB name: RG_ring_1_58121 00:08:07.501 size: 1.000366 MiB name: RG_ring_4_58121 00:08:07.501 size: 1.000366 MiB name: RG_ring_5_58121 00:08:07.501 size: 0.125366 MiB name: RG_ring_2_58121 00:08:07.501 size: 0.015991 MiB name: RG_ring_3_58121 00:08:07.501 end memzones------- 00:08:07.501 07:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:07.501 heap id: 0 total size: 816.000000 MiB number of busy elements: 321 number of free elements: 18 00:08:07.501 list of free elements. size: 16.789917 MiB 00:08:07.501 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:07.501 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:07.501 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:07.501 element at address: 0x200018d00040 with size: 0.999939 MiB 00:08:07.501 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:07.501 element at address: 0x200019200000 with size: 0.999084 MiB 00:08:07.501 element at address: 0x200031e00000 with size: 0.994324 MiB 00:08:07.501 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:07.501 element at address: 0x200018a00000 with size: 0.959656 MiB 00:08:07.501 element at address: 0x200019500040 with size: 0.936401 MiB 00:08:07.501 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:07.501 element at address: 0x20001ac00000 with size: 0.560242 MiB 00:08:07.501 element at address: 0x200000c00000 with size: 0.490173 MiB 00:08:07.501 element at address: 0x200018e00000 with size: 0.487976 MiB 00:08:07.501 element at address: 0x200019600000 
with size: 0.485413 MiB
00:08:07.501 element at address: 0x200012c00000 with size: 0.443481 MiB
00:08:07.501 element at address: 0x200028000000 with size: 0.390442 MiB
00:08:07.501 element at address: 0x200000800000 with size: 0.350891 MiB
00:08:07.501 list of standard malloc elements. size: 199.289185 MiB
00:08:07.501 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:08:07.501 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:08:07.501 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:08:07.501 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:08:07.501 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:08:07.501 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:08:07.501 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:08:07.501 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:08:07.501 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:08:07.501 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:08:07.501 element at address: 0x200012bff040 with size: 0.000305 MiB
00:08:07.501 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fdf40 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe040 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe140 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe240 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe340 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe440 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe540 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe640 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe740 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe840 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fe940 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fea40 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004feb40 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fec40 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fed40 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fee40 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004fef40 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff040 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff140 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff240 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff340 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff440 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff540 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff640 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff740 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff840 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ff940 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ffbc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ffcc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000004ffdc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e1c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e2c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e3c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e4c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e5c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e6c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e7c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e8c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087e9c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087eac0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087ebc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087ecc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087edc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087eec0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087efc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087f0c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087f1c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087f2c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087f3c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000087f4c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000008ff800 with size: 0.000244 MiB
00:08:07.501 element at address: 0x2000008ffa80 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7d7c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7d8c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7d9c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7dac0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7dbc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7dcc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7ddc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7dec0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7dfc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e0c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e1c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e2c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e3c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e4c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e5c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e6c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e7c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e8c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7e9c0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7eac0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000c7ebc0 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000cfef00 with size: 0.000244 MiB
00:08:07.501 element at address: 0x200000cff000 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff200 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff300 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff400 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff500 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff600 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff700 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff800 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ff900 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ffa00 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ffb00 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ffc00 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ffd00 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5ffe00 with size: 0.000244 MiB
00:08:07.501 element at address: 0x20000a5fff00 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff180 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff280 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff380 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff480 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff580 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff680 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff780 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff880 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bff980 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bffa80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bffb80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bffc80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012bfff00 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71880 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71980 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71a80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71b80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71c80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71d80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71e80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c71f80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c72080 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012c72180 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200012cf24c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018afdd00 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7cec0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7cfc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d0c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d1c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d2c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d3c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d4c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d5c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d6c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d7c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d8c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018e7d9c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200018efdd00 with size: 0.000244 MiB
00:08:07.502 element at address: 0x2000192ffc40 with size: 0.000244 MiB
00:08:07.502 element at address: 0x2000195efbc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x2000195efcc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x2000196bc680 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8fac0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8fec0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac900c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac901c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac902c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac903c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac904c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac905c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac906c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac907c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac908c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac909c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac90ac0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac90bc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac90cc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac90dc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac90ec0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac90fc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac910c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac911c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac912c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac913c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac914c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac915c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac916c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac917c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac918c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac919c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac91ac0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac91bc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac91cc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac91dc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac91ec0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac91fc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac920c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac921c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac922c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac923c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac924c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac925c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac926c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac927c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac928c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac929c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac92ac0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac92bc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac92cc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac92dc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac92ec0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac92fc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac930c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac931c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac932c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac933c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac934c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac935c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac936c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac937c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac938c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac939c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac93ac0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac93bc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac93cc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac93dc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac93ec0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac93fc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac940c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac941c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac942c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac943c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac944c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac945c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac946c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac947c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac948c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac949c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac94ac0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac94bc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac94cc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac94dc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac94ec0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac94fc0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac950c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac951c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac952c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20001ac953c0 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200028063f40 with size: 0.000244 MiB
00:08:07.502 element at address: 0x200028064040 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806ad00 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806af80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b080 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b180 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b280 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b380 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b480 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b580 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b680 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b780 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b880 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806b980 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806ba80 with size: 0.000244 MiB
00:08:07.502 element at address: 0x20002806bb80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806bc80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806bd80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806be80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806bf80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c080 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c180 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c280 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c380 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c480 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c580 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c680 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c780 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c880 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806c980 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806ca80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806cb80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806cc80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806cd80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806ce80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806cf80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d080 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d180 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d280 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d380 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d480 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d580 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d680 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d780 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d880 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806d980 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806da80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806db80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806dc80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806dd80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806de80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806df80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e080 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e180 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e280 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e380 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e480 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e580 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e680 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e780 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e880 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806e980 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806ea80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806eb80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806ec80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806ed80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806ee80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806ef80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f080 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f180 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f280 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f380 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f480 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f580 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f680 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f780 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f880 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806f980 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806fa80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806fb80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806fc80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806fd80 with size: 0.000244 MiB
00:08:07.503 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:08:07.503 list of memzone associated elements. size: 599.920898 MiB
00:08:07.503 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:08:07.503 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:08:07.503 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:08:07.503 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:08:07.503 element at address: 0x200012df4740 with size: 92.045105 MiB
00:08:07.503 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58121_0
00:08:07.503 element at address: 0x200000dff340 with size: 48.003113 MiB
00:08:07.503 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58121_0
00:08:07.503 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:08:07.503 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58121_0
00:08:07.503 element at address: 0x2000197be900 with size: 20.255615 MiB
00:08:07.503 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:08:07.503 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:08:07.503 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:08:07.503 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:08:07.503 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58121_0
00:08:07.503 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:08:07.503 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58121
00:08:07.503 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:08:07.503 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58121
00:08:07.503 element at address: 0x200018efde00 with size: 1.008179 MiB
00:08:07.503 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:08:07.503 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:08:07.503 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:08:07.503 element at address: 0x200018afde00 with size: 1.008179 MiB
00:08:07.503 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:08:07.503 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:08:07.503 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:08:07.503 element at address: 0x200000cff100 with size: 1.000549 MiB
00:08:07.503 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58121
00:08:07.503 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:08:07.503 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58121
00:08:07.503 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:08:07.503 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58121
00:08:07.503 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:08:07.503 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58121
00:08:07.503 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:08:07.503 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58121
00:08:07.503 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:08:07.503 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58121
00:08:07.503 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:08:07.503 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:08:07.503 element at address: 0x200012c72280 with size: 0.500549 MiB
00:08:07.503 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:08:07.503 element at address: 0x20001967c440 with size: 0.250549 MiB
00:08:07.503 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:08:07.503 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:08:07.503 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58121
00:08:07.503 element at address: 0x20000085df80 with size: 0.125549 MiB
00:08:07.503 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58121
00:08:07.503 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:08:07.503 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:08:07.503 element at address: 0x200028064140 with size: 0.023804 MiB
00:08:07.503 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:08:07.503 element at address: 0x200000859d40 with size: 0.016174 MiB
00:08:07.503 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58121
00:08:07.503 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:08:07.503 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:08:07.503 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:08:07.503 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58121
00:08:07.503 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:08:07.503 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58121
00:08:07.503 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:08:07.503 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58121
00:08:07.503 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:08:07.503 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:08:07.503 07:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:08:07.503 07:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58121
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58121 ']'
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58121
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58121
00:08:07.503 killing process with pid 58121 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58121'
00:08:07.503 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58121
00:08:07.504 07:04:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58121
00:08:10.789
00:08:10.789 real 0m5.139s
00:08:10.789 user 0m4.979s
00:08:10.789 sys 0m0.754s
00:08:10.789 07:04:52 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:10.789 ************************************
00:08:10.789 END TEST dpdk_mem_utility
00:08:10.789 ************************************
00:08:10.789 07:04:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:08:10.789 07:04:52 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:08:10.789 07:04:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:10.789 07:04:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:10.789 07:04:52 -- common/autotest_common.sh@10 -- # set +x
00:08:10.789 ************************************
00:08:10.789 START TEST event
00:08:10.789 ************************************
00:08:10.789 07:04:52 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:08:10.789 * Looking for test storage...
00:08:10.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:08:10.789 07:04:52 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:10.789 07:04:52 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:10.789 07:04:52 event -- common/autotest_common.sh@1693 -- # lcov --version
00:08:10.789 07:04:52 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:10.789 07:04:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:10.789 07:04:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:10.789 07:04:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:10.789 07:04:52 event -- scripts/common.sh@336 -- # IFS=.-:
00:08:10.789 07:04:52 event -- scripts/common.sh@336 -- # read -ra ver1
00:08:10.789 07:04:52 event -- scripts/common.sh@337 -- # IFS=.-:
00:08:10.789 07:04:52 event -- scripts/common.sh@337 -- # read -ra ver2
00:08:10.789 07:04:52 event -- scripts/common.sh@338 -- # local 'op=<'
00:08:10.789 07:04:52 event -- scripts/common.sh@340 -- # ver1_l=2
00:08:10.789 07:04:52 event -- scripts/common.sh@341 -- # ver2_l=1
00:08:10.789 07:04:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:10.789 07:04:52 event -- scripts/common.sh@344 -- # case "$op" in
00:08:10.789 07:04:52 event -- scripts/common.sh@345 -- # : 1
00:08:10.789 07:04:52 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:10.789 07:04:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:10.789 07:04:52 event -- scripts/common.sh@365 -- # decimal 1
00:08:10.789 07:04:52 event -- scripts/common.sh@353 -- # local d=1
00:08:10.789 07:04:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.789 07:04:52 event -- scripts/common.sh@355 -- # echo 1
00:08:10.789 07:04:52 event -- scripts/common.sh@365 -- # ver1[v]=1
00:08:10.789 07:04:52 event -- scripts/common.sh@366 -- # decimal 2
00:08:10.789 07:04:52 event -- scripts/common.sh@353 -- # local d=2
00:08:10.789 07:04:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:10.789 07:04:52 event -- scripts/common.sh@355 -- # echo 2
00:08:10.789 07:04:52 event -- scripts/common.sh@366 -- # ver2[v]=2
00:08:10.789 07:04:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:10.789 07:04:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:10.789 07:04:52 event -- scripts/common.sh@368 -- # return 0
00:08:10.789 07:04:52 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:10.789 07:04:52 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:10.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.790 --rc genhtml_branch_coverage=1
00:08:10.790 --rc genhtml_function_coverage=1
00:08:10.790 --rc genhtml_legend=1
00:08:10.790 --rc geninfo_all_blocks=1
00:08:10.790 --rc geninfo_unexecuted_blocks=1
00:08:10.790
00:08:10.790 '
00:08:10.790 07:04:52 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.790 --rc genhtml_branch_coverage=1
00:08:10.790 --rc genhtml_function_coverage=1
00:08:10.790 --rc genhtml_legend=1
00:08:10.790 --rc geninfo_all_blocks=1
00:08:10.790 --rc geninfo_unexecuted_blocks=1
00:08:10.790
00:08:10.790 '
00:08:10.790 07:04:52 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.790 --rc genhtml_branch_coverage=1
00:08:10.790 --rc genhtml_function_coverage=1
00:08:10.790 --rc genhtml_legend=1
00:08:10.790 --rc geninfo_all_blocks=1
00:08:10.790 --rc geninfo_unexecuted_blocks=1
00:08:10.790
00:08:10.790 '
00:08:10.790 07:04:52 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.790 --rc genhtml_branch_coverage=1
00:08:10.790 --rc genhtml_function_coverage=1
00:08:10.790 --rc genhtml_legend=1
00:08:10.790 --rc geninfo_all_blocks=1
00:08:10.790 --rc geninfo_unexecuted_blocks=1
00:08:10.790
00:08:10.790 '
00:08:10.790 07:04:52 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:08:10.790 07:04:52 event -- bdev/nbd_common.sh@6 -- # set -e
00:08:10.790 07:04:52 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:08:10.790 07:04:52 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:10.790 07:04:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:10.790 07:04:52 event -- common/autotest_common.sh@10 -- # set +x
00:08:10.790 ************************************
00:08:10.790 START TEST event_perf
00:08:10.790 ************************************
00:08:10.790 07:04:52 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:08:10.790 Running I/O for 1 seconds...[2024-11-20 07:04:52.987892] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:08:10.790 [2024-11-20 07:04:52.988750] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58236 ] 00:08:11.048 [2024-11-20 07:04:53.190492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.306 [2024-11-20 07:04:53.371938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.306 [2024-11-20 07:04:53.372222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.307 [2024-11-20 07:04:53.372282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.307 [2024-11-20 07:04:53.372300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.681 Running I/O for 1 seconds... 00:08:12.681 lcore 0: 72130 00:08:12.681 lcore 1: 72133 00:08:12.681 lcore 2: 72136 00:08:12.681 lcore 3: 72133 00:08:12.681 done. 
00:08:12.681 00:08:12.681 real 0m1.744s 00:08:12.681 user 0m4.452s 00:08:12.681 sys 0m0.153s 00:08:12.681 07:04:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.681 07:04:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:12.681 ************************************ 00:08:12.681 END TEST event_perf 00:08:12.681 ************************************ 00:08:12.681 07:04:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:12.681 07:04:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:12.681 07:04:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.681 07:04:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:12.681 ************************************ 00:08:12.681 START TEST event_reactor 00:08:12.681 ************************************ 00:08:12.681 07:04:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:12.681 [2024-11-20 07:04:54.791531] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:12.681 [2024-11-20 07:04:54.791726] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58281 ] 00:08:12.938 [2024-11-20 07:04:54.989418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.938 [2024-11-20 07:04:55.161795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.384 test_start 00:08:14.384 oneshot 00:08:14.384 tick 100 00:08:14.384 tick 100 00:08:14.384 tick 250 00:08:14.384 tick 100 00:08:14.384 tick 100 00:08:14.384 tick 100 00:08:14.384 tick 250 00:08:14.384 tick 500 00:08:14.384 tick 100 00:08:14.384 tick 100 00:08:14.384 tick 250 00:08:14.384 tick 100 00:08:14.384 tick 100 00:08:14.384 test_end 00:08:14.384 00:08:14.384 real 0m1.715s 00:08:14.384 user 0m1.467s 00:08:14.384 sys 0m0.134s 00:08:14.384 ************************************ 00:08:14.384 END TEST event_reactor 00:08:14.384 ************************************ 00:08:14.384 07:04:56 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.384 07:04:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:14.384 07:04:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:14.384 07:04:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:14.384 07:04:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.384 07:04:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:14.384 ************************************ 00:08:14.384 START TEST event_reactor_perf 00:08:14.384 ************************************ 00:08:14.384 07:04:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:14.384 [2024-11-20 
07:04:56.557175] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:14.384 [2024-11-20 07:04:56.557586] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58323 ] 00:08:14.642 [2024-11-20 07:04:56.765155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.900 [2024-11-20 07:04:56.932419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.273 test_start 00:08:16.273 test_end 00:08:16.273 Performance: 302235 events per second 00:08:16.273 00:08:16.273 real 0m1.714s 00:08:16.273 user 0m1.482s 00:08:16.273 sys 0m0.118s 00:08:16.273 07:04:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.273 07:04:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:16.273 ************************************ 00:08:16.273 END TEST event_reactor_perf 00:08:16.273 ************************************ 00:08:16.273 07:04:58 event -- event/event.sh@49 -- # uname -s 00:08:16.273 07:04:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:16.273 07:04:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:16.273 07:04:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.273 07:04:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.273 07:04:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.273 ************************************ 00:08:16.273 START TEST event_scheduler 00:08:16.273 ************************************ 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:16.273 * Looking for test storage... 
00:08:16.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.273 07:04:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.273 --rc genhtml_branch_coverage=1 00:08:16.273 --rc genhtml_function_coverage=1 00:08:16.273 --rc genhtml_legend=1 00:08:16.273 --rc geninfo_all_blocks=1 00:08:16.273 --rc geninfo_unexecuted_blocks=1 00:08:16.273 00:08:16.273 ' 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.273 --rc genhtml_branch_coverage=1 00:08:16.273 --rc genhtml_function_coverage=1 00:08:16.273 --rc 
genhtml_legend=1 00:08:16.273 --rc geninfo_all_blocks=1 00:08:16.273 --rc geninfo_unexecuted_blocks=1 00:08:16.273 00:08:16.273 ' 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.273 --rc genhtml_branch_coverage=1 00:08:16.273 --rc genhtml_function_coverage=1 00:08:16.273 --rc genhtml_legend=1 00:08:16.273 --rc geninfo_all_blocks=1 00:08:16.273 --rc geninfo_unexecuted_blocks=1 00:08:16.273 00:08:16.273 ' 00:08:16.273 07:04:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.273 --rc genhtml_branch_coverage=1 00:08:16.273 --rc genhtml_function_coverage=1 00:08:16.273 --rc genhtml_legend=1 00:08:16.273 --rc geninfo_all_blocks=1 00:08:16.273 --rc geninfo_unexecuted_blocks=1 00:08:16.273 00:08:16.273 ' 00:08:16.273 07:04:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:16.273 07:04:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58399 00:08:16.274 07:04:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:16.274 07:04:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.274 07:04:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58399 00:08:16.274 07:04:58 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58399 ']' 00:08:16.274 07:04:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.274 07:04:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.274 07:04:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:16.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.274 07:04:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.274 07:04:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:16.532 [2024-11-20 07:04:58.585812] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:16.532 [2024-11-20 07:04:58.586175] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58399 ] 00:08:16.532 [2024-11-20 07:04:58.776198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.790 [2024-11-20 07:04:58.960422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.790 [2024-11-20 07:04:58.960555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.790 [2024-11-20 07:04:58.960560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.790 [2024-11-20 07:04:58.960484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.725 07:04:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.725 07:04:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:17.725 07:04:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:17.725 07:04:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.725 07:04:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:17.725 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.725 POWER: Cannot set governor of lcore 0 to userspace 00:08:17.725 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.725 POWER: Cannot set governor of lcore 0 to performance 00:08:17.725 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.725 POWER: Cannot set governor of lcore 0 to userspace 00:08:17.725 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.725 POWER: Cannot set governor of lcore 0 to userspace 00:08:17.725 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:17.725 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:17.725 POWER: Unable to set Power Management Environment for lcore 0 00:08:17.725 [2024-11-20 07:04:59.710764] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:17.725 [2024-11-20 07:04:59.710829] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:17.725 [2024-11-20 07:04:59.710871] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:17.725 [2024-11-20 07:04:59.710929] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:17.725 [2024-11-20 07:04:59.710964] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:17.725 [2024-11-20 07:04:59.711006] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:17.725 07:04:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.725 07:04:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:17.725 07:04:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.725 07:04:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 [2024-11-20 07:05:00.106257] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:17.984 07:05:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:17.984 07:05:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.984 07:05:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 ************************************ 00:08:17.984 START TEST scheduler_create_thread 00:08:17.984 ************************************ 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 2 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 3 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 4 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 5 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 6 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.984 7 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 8 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 9 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.984 10 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:18.918 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.918 07:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:18.918 07:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:18.918 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.918 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.851 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.851 07:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:19.851 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.851 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.787 07:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.787 07:05:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:20.787 07:05:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:20.787 07:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.787 07:05:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:21.722 07:05:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.722 ************************************ 00:08:21.722 END TEST scheduler_create_thread 00:08:21.722 ************************************ 00:08:21.722 00:08:21.722 real 0m3.552s 00:08:21.722 user 0m0.026s 00:08:21.722 sys 0m0.012s 00:08:21.722 07:05:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.722 07:05:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:21.722 07:05:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:21.722 07:05:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58399 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58399 ']' 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58399 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58399 00:08:21.722 killing process with pid 58399 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58399' 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58399 00:08:21.722 07:05:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58399 00:08:21.982 [2024-11-20 07:05:04.051354] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:23.395 00:08:23.395 real 0m7.194s 00:08:23.395 user 0m14.110s 00:08:23.395 sys 0m0.568s 00:08:23.395 ************************************ 00:08:23.395 07:05:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.395 07:05:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:23.395 END TEST event_scheduler 00:08:23.395 ************************************ 00:08:23.395 07:05:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:23.395 07:05:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:23.395 07:05:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.395 07:05:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.395 07:05:05 event -- common/autotest_common.sh@10 -- # set +x 00:08:23.395 ************************************ 00:08:23.395 START TEST app_repeat 00:08:23.395 ************************************ 00:08:23.395 07:05:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58526 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:23.395 
07:05:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:23.395 Process app_repeat pid: 58526 00:08:23.395 spdk_app_start Round 0 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58526' 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:23.395 07:05:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58526 /var/tmp/spdk-nbd.sock 00:08:23.395 07:05:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58526 ']' 00:08:23.395 07:05:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.395 07:05:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.395 07:05:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.395 07:05:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.395 07:05:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:23.395 [2024-11-20 07:05:05.626812] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:23.395 [2024-11-20 07:05:05.626960] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58526 ] 00:08:23.692 [2024-11-20 07:05:05.808730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:23.949 [2024-11-20 07:05:05.973085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.949 [2024-11-20 07:05:05.973127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.515 07:05:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.515 07:05:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:24.515 07:05:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.774 Malloc0 00:08:24.774 07:05:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:25.033 Malloc1 00:08:25.292 07:05:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:25.292 07:05:07 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.292 07:05:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:25.292 /dev/nbd0 00:08:25.552 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:25.552 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.552 1+0 records in 00:08:25.552 1+0 
records out 00:08:25.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043901 s, 9.3 MB/s 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.552 07:05:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:25.552 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.552 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.552 07:05:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:25.811 /dev/nbd1 00:08:25.811 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.811 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.811 1+0 records in 00:08:25.811 1+0 records out 00:08:25.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472699 s, 8.7 MB/s 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.811 07:05:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:25.811 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.811 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.811 07:05:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.811 07:05:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.811 07:05:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.069 07:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:26.069 { 00:08:26.069 "nbd_device": "/dev/nbd0", 00:08:26.069 "bdev_name": "Malloc0" 00:08:26.069 }, 00:08:26.069 { 00:08:26.069 "nbd_device": "/dev/nbd1", 00:08:26.069 "bdev_name": "Malloc1" 00:08:26.069 } 00:08:26.069 ]' 00:08:26.069 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:26.069 { 00:08:26.069 "nbd_device": "/dev/nbd0", 00:08:26.069 "bdev_name": "Malloc0" 00:08:26.069 }, 00:08:26.069 { 00:08:26.069 "nbd_device": "/dev/nbd1", 00:08:26.070 "bdev_name": "Malloc1" 00:08:26.070 } 00:08:26.070 ]' 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:26.070 /dev/nbd1' 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:26.070 /dev/nbd1' 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:26.070 256+0 records in 00:08:26.070 256+0 records out 00:08:26.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131916 s, 79.5 MB/s 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:26.070 256+0 records in 00:08:26.070 256+0 records out 00:08:26.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273163 s, 38.4 MB/s 00:08:26.070 07:05:08 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:26.070 256+0 records in 00:08:26.070 256+0 records out 00:08:26.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305029 s, 34.4 MB/s 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:26.070 07:05:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:26.328 07:05:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:26.328 07:05:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:26.328 07:05:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.328 07:05:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.329 07:05:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.329 07:05:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:26.329 07:05:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.329 07:05:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.587 07:05:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.846 07:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:27.162 07:05:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:27.162 07:05:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.729 07:05:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:29.101 [2024-11-20 07:05:11.027803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:29.101 [2024-11-20 07:05:11.158735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.101 [2024-11-20 07:05:11.158741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.360 
[2024-11-20 07:05:11.384602] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:29.360 [2024-11-20 07:05:11.384700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:30.732 07:05:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:30.732 spdk_app_start Round 1 00:08:30.732 07:05:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:30.732 07:05:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58526 /var/tmp/spdk-nbd.sock 00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58526 ']' 00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.732 07:05:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:30.732 07:05:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:31.296 Malloc0 00:08:31.296 07:05:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:31.554 Malloc1 00:08:31.554 07:05:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:31.554 07:05:13 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.554 07:05:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:31.811 /dev/nbd0 00:08:31.811 07:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.811 07:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:31.811 07:05:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.811 1+0 records in 00:08:31.811 1+0 records out 00:08:31.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363306 s, 11.3 MB/s 00:08:31.812 07:05:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.812 07:05:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:31.812 07:05:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.812 
07:05:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:31.812 07:05:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:31.812 07:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.812 07:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.812 07:05:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:32.077 /dev/nbd1 00:08:32.077 07:05:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:32.077 07:05:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:32.077 1+0 records in 00:08:32.077 1+0 records out 00:08:32.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028003 s, 14.6 MB/s 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:32.077 07:05:14 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:32.077 07:05:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:32.077 07:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:32.077 07:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:32.077 07:05:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.077 07:05:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.077 07:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.335 07:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:32.335 { 00:08:32.335 "nbd_device": "/dev/nbd0", 00:08:32.335 "bdev_name": "Malloc0" 00:08:32.335 }, 00:08:32.335 { 00:08:32.336 "nbd_device": "/dev/nbd1", 00:08:32.336 "bdev_name": "Malloc1" 00:08:32.336 } 00:08:32.336 ]' 00:08:32.336 07:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:32.336 { 00:08:32.336 "nbd_device": "/dev/nbd0", 00:08:32.336 "bdev_name": "Malloc0" 00:08:32.336 }, 00:08:32.336 { 00:08:32.336 "nbd_device": "/dev/nbd1", 00:08:32.336 "bdev_name": "Malloc1" 00:08:32.336 } 00:08:32.336 ]' 00:08:32.336 07:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.594 07:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:32.594 /dev/nbd1' 00:08:32.594 07:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:32.594 /dev/nbd1' 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:32.595 
07:05:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:32.595 256+0 records in 00:08:32.595 256+0 records out 00:08:32.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013763 s, 76.2 MB/s 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:32.595 256+0 records in 00:08:32.595 256+0 records out 00:08:32.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265447 s, 39.5 MB/s 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:32.595 256+0 records in 00:08:32.595 256+0 records out 00:08:32.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290721 s, 36.1 MB/s 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.595 07:05:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.853 07:05:14 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.853 07:05:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.111 07:05:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:33.368 07:05:15 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:33.369 07:05:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:33.369 07:05:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:33.933 07:05:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:35.309 [2024-11-20 07:05:17.375983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:35.309 [2024-11-20 07:05:17.507269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.309 [2024-11-20 07:05:17.507290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.568 [2024-11-20 07:05:17.730860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:35.568 [2024-11-20 07:05:17.730970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.946 spdk_app_start Round 2 00:08:36.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:36.946 07:05:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:36.946 07:05:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:36.946 07:05:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58526 /var/tmp/spdk-nbd.sock 00:08:36.946 07:05:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58526 ']' 00:08:36.946 07:05:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.946 07:05:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.946 07:05:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:36.946 07:05:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.946 07:05:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:37.206 07:05:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.206 07:05:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:37.206 07:05:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.466 Malloc0 00:08:37.466 07:05:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.725 Malloc1 00:08:37.985 07:05:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.985 07:05:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:37.985 /dev/nbd0 00:08:38.245 07:05:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:38.245 07:05:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:38.245 1+0 records in 00:08:38.245 1+0 records out 00:08:38.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504155 s, 8.1 MB/s 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.245 07:05:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:38.245 07:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.245 07:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:38.245 07:05:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:38.505 /dev/nbd1 00:08:38.505 07:05:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:38.505 07:05:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:38.505 07:05:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:38.505 07:05:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:38.505 07:05:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:38.506 07:05:20 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:38.506 1+0 records in 00:08:38.506 1+0 records out 00:08:38.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456405 s, 9.0 MB/s 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.506 07:05:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:38.506 07:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.506 07:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:38.506 07:05:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.506 07:05:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.506 07:05:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.766 { 00:08:38.766 "nbd_device": "/dev/nbd0", 00:08:38.766 "bdev_name": "Malloc0" 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "nbd_device": "/dev/nbd1", 00:08:38.766 "bdev_name": "Malloc1" 00:08:38.766 } 00:08:38.766 ]' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.766 { 
00:08:38.766 "nbd_device": "/dev/nbd0", 00:08:38.766 "bdev_name": "Malloc0" 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "nbd_device": "/dev/nbd1", 00:08:38.766 "bdev_name": "Malloc1" 00:08:38.766 } 00:08:38.766 ]' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.766 /dev/nbd1' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.766 /dev/nbd1' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:38.766 256+0 records in 00:08:38.766 256+0 records out 00:08:38.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057975 s, 181 MB/s 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.766 07:05:20 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:38.766 256+0 records in 00:08:38.766 256+0 records out 00:08:38.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274271 s, 38.2 MB/s 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:38.766 256+0 records in 00:08:38.766 256+0 records out 00:08:38.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213717 s, 49.1 MB/s 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:38.766 07:05:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.767 07:05:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.027 07:05:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:39.286 07:05:21 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.286 07:05:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.548 07:05:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:39.549 07:05:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:39.549 07:05:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:40.122 07:05:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:41.501 
[2024-11-20 07:05:23.444982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.501 [2024-11-20 07:05:23.570200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.501 [2024-11-20 07:05:23.570200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.760 [2024-11-20 07:05:23.775252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:41.760 [2024-11-20 07:05:23.775360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:43.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:43.149 07:05:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58526 /var/tmp/spdk-nbd.sock 00:08:43.149 07:05:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58526 ']' 00:08:43.149 07:05:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:43.149 07:05:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.149 07:05:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:43.149 07:05:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.149 07:05:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:43.420 07:05:25 event.app_repeat -- event/event.sh@39 -- # killprocess 58526 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58526 ']' 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58526 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58526 00:08:43.420 killing process with pid 58526 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58526' 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58526 00:08:43.420 07:05:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58526 00:08:44.358 spdk_app_start is called in Round 0. 00:08:44.358 Shutdown signal received, stop current app iteration 00:08:44.358 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:08:44.358 spdk_app_start is called in Round 1. 00:08:44.358 Shutdown signal received, stop current app iteration 00:08:44.358 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:08:44.358 spdk_app_start is called in Round 2. 
00:08:44.358 Shutdown signal received, stop current app iteration 00:08:44.358 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:08:44.358 spdk_app_start is called in Round 3. 00:08:44.358 Shutdown signal received, stop current app iteration 00:08:44.358 07:05:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:44.358 07:05:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:44.358 00:08:44.358 real 0m21.065s 00:08:44.358 user 0m45.800s 00:08:44.358 sys 0m3.047s 00:08:44.358 07:05:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.358 07:05:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:44.358 ************************************ 00:08:44.358 END TEST app_repeat 00:08:44.358 ************************************ 00:08:44.617 07:05:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:44.617 07:05:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:44.617 07:05:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.617 07:05:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.617 07:05:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:44.617 ************************************ 00:08:44.617 START TEST cpu_locks 00:08:44.617 ************************************ 00:08:44.617 07:05:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:44.617 * Looking for test storage... 
00:08:44.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:44.617 07:05:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.617 07:05:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.617 07:05:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.876 07:05:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.876 07:05:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:44.876 07:05:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.876 07:05:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.876 --rc genhtml_branch_coverage=1 00:08:44.876 --rc genhtml_function_coverage=1 00:08:44.876 --rc genhtml_legend=1 00:08:44.876 --rc geninfo_all_blocks=1 00:08:44.876 --rc geninfo_unexecuted_blocks=1 00:08:44.876 00:08:44.876 ' 00:08:44.876 07:05:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.876 --rc genhtml_branch_coverage=1 00:08:44.876 --rc genhtml_function_coverage=1 00:08:44.876 --rc genhtml_legend=1 00:08:44.876 --rc geninfo_all_blocks=1 00:08:44.876 --rc geninfo_unexecuted_blocks=1 
00:08:44.876 00:08:44.876 ' 00:08:44.876 07:05:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.876 --rc genhtml_branch_coverage=1 00:08:44.876 --rc genhtml_function_coverage=1 00:08:44.876 --rc genhtml_legend=1 00:08:44.876 --rc geninfo_all_blocks=1 00:08:44.876 --rc geninfo_unexecuted_blocks=1 00:08:44.876 00:08:44.876 ' 00:08:44.876 07:05:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.876 --rc genhtml_branch_coverage=1 00:08:44.876 --rc genhtml_function_coverage=1 00:08:44.876 --rc genhtml_legend=1 00:08:44.876 --rc geninfo_all_blocks=1 00:08:44.876 --rc geninfo_unexecuted_blocks=1 00:08:44.876 00:08:44.876 ' 00:08:44.876 07:05:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:44.876 07:05:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:44.876 07:05:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:44.876 07:05:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:44.877 07:05:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.877 07:05:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.877 07:05:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.877 ************************************ 00:08:44.877 START TEST default_locks 00:08:44.877 ************************************ 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58993 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58993 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58993 ']' 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.877 07:05:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.877 [2024-11-20 07:05:27.027047] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:44.877 [2024-11-20 07:05:27.027188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58993 ] 00:08:45.136 [2024-11-20 07:05:27.203323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.136 [2024-11-20 07:05:27.328041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.097 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.097 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:46.097 07:05:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58993 00:08:46.097 07:05:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58993 00:08:46.097 07:05:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58993 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58993 ']' 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58993 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58993 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.356 killing process with pid 58993 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58993' 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58993 00:08:46.356 07:05:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58993 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58993 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58993 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58993 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58993 ']' 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58993) - No such process 00:08:49.643 ERROR: process (pid: 58993) is no longer running 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:49.643 00:08:49.643 real 0m4.407s 00:08:49.643 user 0m4.382s 00:08:49.643 sys 0m0.602s 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.643 07:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 ************************************ 00:08:49.643 END TEST default_locks 00:08:49.643 ************************************ 00:08:49.643 07:05:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:49.643 07:05:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:08:49.643 07:05:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.643 07:05:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 ************************************ 00:08:49.643 START TEST default_locks_via_rpc 00:08:49.643 ************************************ 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59068 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59068 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59068 ']' 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.643 07:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 [2024-11-20 07:05:31.534399] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:49.643 [2024-11-20 07:05:31.534579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59068 ] 00:08:49.643 [2024-11-20 07:05:31.712891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.643 [2024-11-20 07:05:31.848783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.580 07:05:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59068 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59068 00:08:50.580 07:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59068 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59068 ']' 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59068 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59068 00:08:51.150 killing process with pid 59068 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59068' 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59068 00:08:51.150 07:05:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59068 00:08:54.439 ************************************ 00:08:54.439 END TEST default_locks_via_rpc 00:08:54.439 ************************************ 00:08:54.439 00:08:54.439 real 0m4.781s 00:08:54.439 user 0m4.765s 00:08:54.439 sys 0m0.720s 00:08:54.440 
07:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.440 07:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.440 07:05:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:54.440 07:05:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.440 07:05:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.440 07:05:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.440 ************************************ 00:08:54.440 START TEST non_locking_app_on_locked_coremask 00:08:54.440 ************************************ 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59152 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59152 /var/tmp/spdk.sock 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59152 ']' 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.440 07:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:54.440 [2024-11-20 07:05:36.369216] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:54.440 [2024-11-20 07:05:36.369614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:08:54.440 [2024-11-20 07:05:36.559952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.698 [2024-11-20 07:05:36.727592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59172 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59172 /var/tmp/spdk2.sock 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59172 ']' 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.131 07:05:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.131 07:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:56.131 [2024-11-20 07:05:38.122299] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:56.131 [2024-11-20 07:05:38.122746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59172 ] 00:08:56.131 [2024-11-20 07:05:38.339750] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:56.131 [2024-11-20 07:05:38.339845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.701 [2024-11-20 07:05:38.691198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.249 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.249 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:59.249 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59152 00:08:59.249 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59152 00:08:59.249 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59152 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59152 ']' 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59152 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59152 00:08:59.507 killing process with pid 59152 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59152' 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59152 00:08:59.507 07:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59152 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59172 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59172 ']' 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59172 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59172 00:09:06.086 killing process with pid 59172 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59172' 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59172 00:09:06.086 07:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59172 00:09:08.658 00:09:08.658 real 0m14.230s 00:09:08.658 user 0m14.397s 00:09:08.658 sys 0m1.792s 00:09:08.658 07:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:08.658 07:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.658 ************************************ 00:09:08.658 END TEST non_locking_app_on_locked_coremask 00:09:08.658 ************************************ 00:09:08.658 07:05:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:08.658 07:05:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.658 07:05:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.658 07:05:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.658 ************************************ 00:09:08.658 START TEST locking_app_on_unlocked_coremask 00:09:08.658 ************************************ 00:09:08.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.658 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:08.658 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59343 00:09:08.658 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59343 /var/tmp/spdk.sock 00:09:08.659 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:08.659 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59343 ']' 00:09:08.659 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.659 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.659 07:05:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.659 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.659 07:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.659 [2024-11-20 07:05:50.652482] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:08.659 [2024-11-20 07:05:50.652758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59343 ] 00:09:08.659 [2024-11-20 07:05:50.826453] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:08.659 [2024-11-20 07:05:50.826779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.917 [2024-11-20 07:05:51.005554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59368 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59368 /var/tmp/spdk2.sock 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59368 ']' 00:09:10.304 07:05:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:10.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.304 07:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.304 [2024-11-20 07:05:52.305792] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:10.304 [2024-11-20 07:05:52.306089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:09:10.304 [2024-11-20 07:05:52.503275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.881 [2024-11-20 07:05:52.838714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.455 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.455 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:13.455 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59368 00:09:13.455 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59368 00:09:13.455 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59343 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59343 ']' 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59343 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59343 00:09:13.713 killing process with pid 59343 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59343' 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59343 00:09:13.713 07:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59343 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59368 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59368 ']' 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59368 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:20.277 
07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59368 00:09:20.277 killing process with pid 59368 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59368' 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59368 00:09:20.277 07:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59368 00:09:22.834 00:09:22.834 real 0m13.985s 00:09:22.834 user 0m13.981s 00:09:22.834 sys 0m1.819s 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.834 ************************************ 00:09:22.834 END TEST locking_app_on_unlocked_coremask 00:09:22.834 ************************************ 00:09:22.834 07:06:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:22.834 07:06:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.834 07:06:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.834 07:06:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.834 ************************************ 00:09:22.834 START TEST locking_app_on_locked_coremask 00:09:22.834 
************************************ 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59534 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59534 /var/tmp/spdk.sock 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59534 ']' 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.834 07:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.834 [2024-11-20 07:06:04.694633] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:22.834 [2024-11-20 07:06:04.694853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59534 ] 00:09:22.834 [2024-11-20 07:06:04.871867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.834 [2024-11-20 07:06:05.015668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59555 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59555 /var/tmp/spdk2.sock 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59555 /var/tmp/spdk2.sock 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59555 /var/tmp/spdk2.sock 00:09:23.770 07:06:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59555 ']' 00:09:23.770 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:23.770 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.770 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:23.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:23.770 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.770 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.074 [2024-11-20 07:06:06.093891] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:24.074 [2024-11-20 07:06:06.094112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59555 ] 00:09:24.074 [2024-11-20 07:06:06.280391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59534 has claimed it. 00:09:24.074 [2024-11-20 07:06:06.280485] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:24.659 ERROR: process (pid: 59555) is no longer running 00:09:24.659 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59555) - No such process 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59534 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59534 00:09:24.659 07:06:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59534 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59534 ']' 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59534 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59534 00:09:25.224 
killing process with pid 59534 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59534' 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59534 00:09:25.224 07:06:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59534 00:09:27.779 ************************************ 00:09:27.779 END TEST locking_app_on_locked_coremask 00:09:27.779 ************************************ 00:09:27.779 00:09:27.779 real 0m5.363s 00:09:27.779 user 0m5.585s 00:09:27.779 sys 0m0.843s 00:09:27.779 07:06:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.779 07:06:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.779 07:06:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:27.779 07:06:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.779 07:06:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.779 07:06:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:27.779 ************************************ 00:09:27.779 START TEST locking_overlapped_coremask 00:09:27.779 ************************************ 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59625 00:09:27.779 07:06:10 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59625 /var/tmp/spdk.sock 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59625 ']' 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.779 07:06:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.038 [2024-11-20 07:06:10.133407] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:28.038 [2024-11-20 07:06:10.133678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59625 ] 00:09:28.297 [2024-11-20 07:06:10.315032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.297 [2024-11-20 07:06:10.438403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.297 [2024-11-20 07:06:10.438488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.297 [2024-11-20 07:06:10.438547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59652 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59652 /var/tmp/spdk2.sock 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59652 /var/tmp/spdk2.sock 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:29.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59652 /var/tmp/spdk2.sock 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59652 ']' 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.235 07:06:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:29.235 [2024-11-20 07:06:11.432074] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:29.235 [2024-11-20 07:06:11.432279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:09:29.494 [2024-11-20 07:06:11.621321] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59625 has claimed it. 00:09:29.494 [2024-11-20 07:06:11.621656] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:30.061 ERROR: process (pid: 59652) is no longer running 00:09:30.061 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59652) - No such process 00:09:30.061 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59625 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59625 ']' 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59625 00:09:30.062 07:06:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59625 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59625' 00:09:30.062 killing process with pid 59625 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59625 00:09:30.062 07:06:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59625 00:09:32.628 00:09:32.628 real 0m4.667s 00:09:32.628 user 0m12.728s 00:09:32.628 sys 0m0.656s 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 ************************************ 00:09:32.628 END TEST locking_overlapped_coremask 00:09:32.628 ************************************ 00:09:32.628 07:06:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:32.628 07:06:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.628 07:06:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.628 07:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 ************************************ 00:09:32.628 START TEST 
locking_overlapped_coremask_via_rpc 00:09:32.628 ************************************ 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59721 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59721 /var/tmp/spdk.sock 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59721 ']' 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.628 07:06:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 [2024-11-20 07:06:14.882893] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:32.628 [2024-11-20 07:06:14.883830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59721 ] 00:09:32.886 [2024-11-20 07:06:15.079604] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:32.886 [2024-11-20 07:06:15.079675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.144 [2024-11-20 07:06:15.204769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.144 [2024-11-20 07:06:15.204964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.144 [2024-11-20 07:06:15.205027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59739 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59739 /var/tmp/spdk2.sock 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59739 ']' 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.078 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.079 07:06:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.079 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.079 07:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.079 [2024-11-20 07:06:16.222531] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:34.079 [2024-11-20 07:06:16.222823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:09:34.337 [2024-11-20 07:06:16.410582] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:34.337 [2024-11-20 07:06:16.414373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.595 [2024-11-20 07:06:16.740282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.595 [2024-11-20 07:06:16.743412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.595 [2024-11-20 07:06:16.743421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.126 07:06:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.126 [2024-11-20 07:06:18.969591] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59721 has claimed it. 00:09:37.126 request: 00:09:37.126 { 00:09:37.126 "method": "framework_enable_cpumask_locks", 00:09:37.126 "req_id": 1 00:09:37.126 } 00:09:37.126 Got JSON-RPC error response 00:09:37.126 response: 00:09:37.126 { 00:09:37.126 "code": -32603, 00:09:37.126 "message": "Failed to claim CPU core: 2" 00:09:37.126 } 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59721 /var/tmp/spdk.sock 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59721 ']' 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.126 07:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59739 /var/tmp/spdk2.sock 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59739 ']' 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
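The request/response pair logged above (`framework_enable_cpumask_locks` failing with code -32603) is JSON-RPC over the target's UNIX socket. A sketch of building an equivalent request body; the exact wire framing and key names (the log prints SPDK's internal `req_id`, while JSON-RPC 2.0 uses `id`) are assumptions here, not taken from the log:

```python
import json

def build_rpc_request(method: str, req_id: int = 1, params=None) -> str:
    """Build a JSON-RPC 2.0 request like the one sent to the target's
    UNIX socket (e.g. /var/tmp/spdk2.sock in this test)."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

msg = build_rpc_request("framework_enable_cpumask_locks")
```

In the test, the `NOT rpc_cmd ...` wrapper asserts that this call fails on the second target, since enabling the locks would require claiming core 2, which pid 59721 already holds.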
00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.126 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:37.386 00:09:37.386 real 0m4.714s 00:09:37.386 user 0m1.627s 00:09:37.386 sys 0m0.222s 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.386 07:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.386 ************************************ 00:09:37.386 END TEST locking_overlapped_coremask_via_rpc 00:09:37.386 ************************************ 00:09:37.386 07:06:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:37.386 07:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59721 ]] 00:09:37.386 07:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59721 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59721 ']' 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59721 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59721 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59721' 00:09:37.386 killing process with pid 59721 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59721 00:09:37.386 07:06:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59721 00:09:40.713 07:06:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59739 ]] 00:09:40.713 07:06:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59739 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59739 ']' 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59739 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59739 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59739' 00:09:40.713 killing 
process with pid 59739 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59739 00:09:40.713 07:06:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59739 00:09:43.250 07:06:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:43.250 07:06:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:43.250 07:06:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59721 ]] 00:09:43.250 07:06:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59721 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59721 ']' 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59721 00:09:43.250 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59721) - No such process 00:09:43.250 Process with pid 59721 is not found 00:09:43.250 Process with pid 59739 is not found 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59721 is not found' 00:09:43.250 07:06:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59739 ]] 00:09:43.250 07:06:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59739 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59739 ']' 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59739 00:09:43.250 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59739) - No such process 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59739 is not found' 00:09:43.250 07:06:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:43.250 00:09:43.250 real 0m58.516s 00:09:43.250 user 1m37.137s 00:09:43.250 sys 0m8.090s 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.250 07:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:43.250 
************************************ 00:09:43.250 END TEST cpu_locks 00:09:43.250 ************************************ 00:09:43.250 00:09:43.250 real 1m32.516s 00:09:43.250 user 2m44.664s 00:09:43.250 sys 0m12.464s 00:09:43.250 07:06:25 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.250 07:06:25 event -- common/autotest_common.sh@10 -- # set +x 00:09:43.250 ************************************ 00:09:43.250 END TEST event 00:09:43.250 ************************************ 00:09:43.250 07:06:25 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:43.250 07:06:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.250 07:06:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.250 07:06:25 -- common/autotest_common.sh@10 -- # set +x 00:09:43.250 ************************************ 00:09:43.250 START TEST thread 00:09:43.250 ************************************ 00:09:43.250 07:06:25 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:43.250 * Looking for test storage... 
00:09:43.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:43.250 07:06:25 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.250 07:06:25 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.250 07:06:25 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.509 07:06:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.509 07:06:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.509 07:06:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.509 07:06:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.509 07:06:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.509 07:06:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.509 07:06:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.509 07:06:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.509 07:06:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.509 07:06:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.509 07:06:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.509 07:06:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:43.509 07:06:25 thread -- scripts/common.sh@345 -- # : 1 00:09:43.509 07:06:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.509 07:06:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.509 07:06:25 thread -- scripts/common.sh@365 -- # decimal 1 00:09:43.509 07:06:25 thread -- scripts/common.sh@353 -- # local d=1 00:09:43.509 07:06:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.509 07:06:25 thread -- scripts/common.sh@355 -- # echo 1 00:09:43.509 07:06:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.509 07:06:25 thread -- scripts/common.sh@366 -- # decimal 2 00:09:43.509 07:06:25 thread -- scripts/common.sh@353 -- # local d=2 00:09:43.509 07:06:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.509 07:06:25 thread -- scripts/common.sh@355 -- # echo 2 00:09:43.509 07:06:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.509 07:06:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.509 07:06:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.509 07:06:25 thread -- scripts/common.sh@368 -- # return 0 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.509 --rc genhtml_branch_coverage=1 00:09:43.509 --rc genhtml_function_coverage=1 00:09:43.509 --rc genhtml_legend=1 00:09:43.509 --rc geninfo_all_blocks=1 00:09:43.509 --rc geninfo_unexecuted_blocks=1 00:09:43.509 00:09:43.509 ' 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.509 --rc genhtml_branch_coverage=1 00:09:43.509 --rc genhtml_function_coverage=1 00:09:43.509 --rc genhtml_legend=1 00:09:43.509 --rc geninfo_all_blocks=1 00:09:43.509 --rc geninfo_unexecuted_blocks=1 00:09:43.509 00:09:43.509 ' 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.509 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.509 --rc genhtml_branch_coverage=1 00:09:43.509 --rc genhtml_function_coverage=1 00:09:43.509 --rc genhtml_legend=1 00:09:43.509 --rc geninfo_all_blocks=1 00:09:43.509 --rc geninfo_unexecuted_blocks=1 00:09:43.509 00:09:43.509 ' 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.509 --rc genhtml_branch_coverage=1 00:09:43.509 --rc genhtml_function_coverage=1 00:09:43.509 --rc genhtml_legend=1 00:09:43.509 --rc geninfo_all_blocks=1 00:09:43.509 --rc geninfo_unexecuted_blocks=1 00:09:43.509 00:09:43.509 ' 00:09:43.509 07:06:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:43.509 07:06:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.510 07:06:25 thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.510 ************************************ 00:09:43.510 START TEST thread_poller_perf 00:09:43.510 ************************************ 00:09:43.510 07:06:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:43.510 [2024-11-20 07:06:25.615215] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:43.510 [2024-11-20 07:06:25.615368] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:09:43.769 [2024-11-20 07:06:25.795309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.769 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:43.769 [2024-11-20 07:06:25.933496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.149 [2024-11-20T07:06:27.414Z] ====================================== 00:09:45.149 [2024-11-20T07:06:27.414Z] busy:2303679150 (cyc) 00:09:45.149 [2024-11-20T07:06:27.414Z] total_run_count: 351000 00:09:45.149 [2024-11-20T07:06:27.414Z] tsc_hz: 2290000000 (cyc) 00:09:45.149 [2024-11-20T07:06:27.414Z] ====================================== 00:09:45.149 [2024-11-20T07:06:27.414Z] poller_cost: 6563 (cyc), 2865 (nsec) 00:09:45.149 00:09:45.149 real 0m1.660s 00:09:45.149 user 0m1.450s 00:09:45.149 sys 0m0.100s 00:09:45.149 ************************************ 00:09:45.149 END TEST thread_poller_perf 00:09:45.149 ************************************ 00:09:45.149 07:06:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.149 07:06:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:45.149 07:06:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:45.149 07:06:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:45.149 07:06:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.149 07:06:27 thread -- common/autotest_common.sh@10 -- # set +x 00:09:45.149 ************************************ 00:09:45.149 START TEST thread_poller_perf 00:09:45.149 
************************************ 00:09:45.149 07:06:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:45.149 [2024-11-20 07:06:27.343491] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:45.149 [2024-11-20 07:06:27.343720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59982 ] 00:09:45.408 [2024-11-20 07:06:27.527505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.668 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:45.668 [2024-11-20 07:06:27.676457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.048 [2024-11-20T07:06:29.313Z] ====================================== 00:09:47.048 [2024-11-20T07:06:29.313Z] busy:2295094626 (cyc) 00:09:47.048 [2024-11-20T07:06:29.313Z] total_run_count: 4501000 00:09:47.048 [2024-11-20T07:06:29.313Z] tsc_hz: 2290000000 (cyc) 00:09:47.048 [2024-11-20T07:06:29.313Z] ====================================== 00:09:47.048 [2024-11-20T07:06:29.313Z] poller_cost: 509 (cyc), 222 (nsec) 00:09:47.048 00:09:47.048 real 0m1.653s 00:09:47.048 user 0m1.418s 00:09:47.048 sys 0m0.125s 00:09:47.048 07:06:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.048 ************************************ 00:09:47.048 END TEST thread_poller_perf 00:09:47.048 ************************************ 00:09:47.048 07:06:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:47.048 07:06:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:47.048 ************************************ 00:09:47.048 END TEST thread 00:09:47.048 ************************************ 00:09:47.048 
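The two poller_perf summaries above report poller_cost derived from the busy cycle count, total_run_count, and tsc_hz. A minimal sketch of that arithmetic (the helper name is hypothetical; the constants are copied from the summary blocks in this log):

```shell
# Sketch: reproduce the poller_cost lines from the busy/total_run_count/tsc_hz
# counters printed above. Integer division matches the reported values.
poller_cost() {
    local busy=$1 runs=$2 tsc_hz=$3
    local cyc=$(( busy / runs ))                   # cycles per poller invocation
    local nsec=$(( cyc * 1000000000 / tsc_hz ))    # convert via TSC frequency
    echo "$cyc $nsec"
}

poller_cost 2303679150 351000 2290000000   # -> 6563 2865 (1 us period run)
poller_cost 2295094626 4501000 2290000000  # -> 509 222   (0 us period run)
```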
00:09:47.048 real 0m3.680s 00:09:47.048 user 0m3.024s 00:09:47.048 sys 0m0.446s 00:09:47.048 07:06:28 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.048 07:06:28 thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.048 07:06:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:47.048 07:06:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:47.048 07:06:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.048 07:06:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.048 07:06:29 -- common/autotest_common.sh@10 -- # set +x 00:09:47.048 ************************************ 00:09:47.048 START TEST app_cmdline 00:09:47.048 ************************************ 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:47.048 * Looking for test storage... 00:09:47.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
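The scripts/common.sh trace repeated before each test (`lt 1.15 2`, `cmp_versions`, `decimal`) is a field-by-field version comparison that decides whether the legacy lcov `--rc` options are exported. A standalone condensed sketch of that logic (the function name and zero-padding of missing fields are assumptions distilled from the trace, not the script verbatim):

```shell
# Sketch: split both versions on '.', '-' and ':' (as the traced IFS=.-: does)
# and compare numerically field by field; returns 0 when $1 sorts before $2.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing fields with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1
}

# lcov 1.15 sorts before 2, so the legacy --rc lcov_* options get used:
version_lt 1.15 2 && echo legacy   # -> prints "legacy"
```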
00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.048 07:06:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.048 --rc genhtml_branch_coverage=1 00:09:47.048 --rc genhtml_function_coverage=1 00:09:47.048 --rc 
genhtml_legend=1 00:09:47.048 --rc geninfo_all_blocks=1 00:09:47.048 --rc geninfo_unexecuted_blocks=1 00:09:47.048 00:09:47.048 ' 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.048 --rc genhtml_branch_coverage=1 00:09:47.048 --rc genhtml_function_coverage=1 00:09:47.048 --rc genhtml_legend=1 00:09:47.048 --rc geninfo_all_blocks=1 00:09:47.048 --rc geninfo_unexecuted_blocks=1 00:09:47.048 00:09:47.048 ' 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.048 --rc genhtml_branch_coverage=1 00:09:47.048 --rc genhtml_function_coverage=1 00:09:47.048 --rc genhtml_legend=1 00:09:47.048 --rc geninfo_all_blocks=1 00:09:47.048 --rc geninfo_unexecuted_blocks=1 00:09:47.048 00:09:47.048 ' 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.048 --rc genhtml_branch_coverage=1 00:09:47.048 --rc genhtml_function_coverage=1 00:09:47.048 --rc genhtml_legend=1 00:09:47.048 --rc geninfo_all_blocks=1 00:09:47.048 --rc geninfo_unexecuted_blocks=1 00:09:47.048 00:09:47.048 ' 00:09:47.048 07:06:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:47.048 07:06:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60071 00:09:47.048 07:06:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:47.048 07:06:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60071 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60071 ']' 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.048 07:06:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:47.308 [2024-11-20 07:06:29.384809] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:47.308 [2024-11-20 07:06:29.385020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60071 ] 00:09:47.308 [2024-11-20 07:06:29.563369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.567 [2024-11-20 07:06:29.714768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.943 07:06:30 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.943 07:06:30 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:48.943 07:06:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:48.943 { 00:09:48.943 "version": "SPDK v25.01-pre git sha1 6fc96a60f", 00:09:48.943 "fields": { 00:09:48.943 "major": 25, 00:09:48.943 "minor": 1, 00:09:48.943 "patch": 0, 00:09:48.943 "suffix": "-pre", 00:09:48.943 "commit": "6fc96a60f" 00:09:48.943 } 00:09:48.943 } 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:48.943 07:06:31 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.943 07:06:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:48.943 07:06:31 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.943 07:06:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:48.944 07:06:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:48.944 07:06:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:48.944 07:06:31 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:49.203 request: 00:09:49.203 { 00:09:49.203 "method": "env_dpdk_get_mem_stats", 00:09:49.203 "req_id": 1 00:09:49.203 } 00:09:49.203 Got JSON-RPC error response 00:09:49.203 response: 00:09:49.203 { 00:09:49.203 "code": -32601, 00:09:49.203 "message": "Method not found" 00:09:49.203 } 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.203 07:06:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60071 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60071 ']' 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60071 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60071 00:09:49.203 killing process with pid 60071 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.203 07:06:31 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.204 07:06:31 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60071' 00:09:49.204 07:06:31 app_cmdline -- common/autotest_common.sh@973 -- # kill 60071 00:09:49.204 07:06:31 app_cmdline -- common/autotest_common.sh@978 -- # wait 60071 00:09:52.492 00:09:52.492 real 0m5.288s 00:09:52.492 user 0m5.421s 00:09:52.492 sys 0m0.791s 00:09:52.492 07:06:34 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.492 07:06:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:52.492 ************************************ 00:09:52.492 END TEST app_cmdline 00:09:52.492 ************************************ 00:09:52.492 07:06:34 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:52.492 07:06:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.492 07:06:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.492 07:06:34 -- common/autotest_common.sh@10 -- # set +x 00:09:52.492 ************************************ 00:09:52.492 START TEST version 00:09:52.492 ************************************ 00:09:52.492 07:06:34 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:52.492 * Looking for test storage... 00:09:52.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:52.492 07:06:34 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.492 07:06:34 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.492 07:06:34 version -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.492 07:06:34 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.492 07:06:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.492 07:06:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.492 07:06:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.492 07:06:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.492 07:06:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.492 07:06:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.492 07:06:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.492 07:06:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.492 07:06:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.492 07:06:34 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:52.492 07:06:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.492 07:06:34 version -- scripts/common.sh@344 -- # case "$op" in 00:09:52.492 07:06:34 version -- scripts/common.sh@345 -- # : 1 00:09:52.492 07:06:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.492 07:06:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.492 07:06:34 version -- scripts/common.sh@365 -- # decimal 1 00:09:52.493 07:06:34 version -- scripts/common.sh@353 -- # local d=1 00:09:52.493 07:06:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.493 07:06:34 version -- scripts/common.sh@355 -- # echo 1 00:09:52.493 07:06:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.493 07:06:34 version -- scripts/common.sh@366 -- # decimal 2 00:09:52.493 07:06:34 version -- scripts/common.sh@353 -- # local d=2 00:09:52.493 07:06:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.493 07:06:34 version -- scripts/common.sh@355 -- # echo 2 00:09:52.493 07:06:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.493 07:06:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.493 07:06:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.493 07:06:34 version -- scripts/common.sh@368 -- # return 0 00:09:52.493 07:06:34 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.493 07:06:34 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.493 --rc genhtml_branch_coverage=1 00:09:52.493 --rc genhtml_function_coverage=1 00:09:52.493 --rc genhtml_legend=1 00:09:52.493 --rc geninfo_all_blocks=1 00:09:52.493 --rc geninfo_unexecuted_blocks=1 00:09:52.493 00:09:52.493 ' 00:09:52.493 07:06:34 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:09:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.493 --rc genhtml_branch_coverage=1 00:09:52.493 --rc genhtml_function_coverage=1 00:09:52.493 --rc genhtml_legend=1 00:09:52.493 --rc geninfo_all_blocks=1 00:09:52.493 --rc geninfo_unexecuted_blocks=1 00:09:52.493 00:09:52.493 ' 00:09:52.493 07:06:34 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.493 --rc genhtml_branch_coverage=1 00:09:52.493 --rc genhtml_function_coverage=1 00:09:52.493 --rc genhtml_legend=1 00:09:52.493 --rc geninfo_all_blocks=1 00:09:52.493 --rc geninfo_unexecuted_blocks=1 00:09:52.493 00:09:52.493 ' 00:09:52.493 07:06:34 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.493 --rc genhtml_branch_coverage=1 00:09:52.493 --rc genhtml_function_coverage=1 00:09:52.493 --rc genhtml_legend=1 00:09:52.493 --rc geninfo_all_blocks=1 00:09:52.493 --rc geninfo_unexecuted_blocks=1 00:09:52.493 00:09:52.493 ' 00:09:52.493 07:06:34 version -- app/version.sh@17 -- # get_header_version major 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:09:52.493 07:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # cut -f2 00:09:52.493 07:06:34 version -- app/version.sh@17 -- # major=25 00:09:52.493 07:06:34 version -- app/version.sh@18 -- # get_header_version minor 00:09:52.493 07:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # cut -f2 00:09:52.493 07:06:34 version -- app/version.sh@18 -- # minor=1 00:09:52.493 07:06:34 
version -- app/version.sh@19 -- # get_header_version patch 00:09:52.493 07:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # cut -f2 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:09:52.493 07:06:34 version -- app/version.sh@19 -- # patch=0 00:09:52.493 07:06:34 version -- app/version.sh@20 -- # get_header_version suffix 00:09:52.493 07:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # cut -f2 00:09:52.493 07:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:09:52.493 07:06:34 version -- app/version.sh@20 -- # suffix=-pre 00:09:52.493 07:06:34 version -- app/version.sh@22 -- # version=25.1 00:09:52.493 07:06:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:52.493 07:06:34 version -- app/version.sh@28 -- # version=25.1rc0 00:09:52.493 07:06:34 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:52.493 07:06:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:52.493 07:06:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:52.493 07:06:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:52.493 ************************************ 00:09:52.493 END TEST version 00:09:52.493 ************************************ 00:09:52.493 00:09:52.493 real 0m0.301s 00:09:52.493 user 0m0.186s 00:09:52.493 sys 0m0.159s 00:09:52.493 07:06:34 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.493 07:06:34 version -- common/autotest_common.sh@10 -- # set +x 00:09:52.751 
07:06:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:52.751 07:06:34 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:52.751 07:06:34 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:52.751 07:06:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.751 07:06:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.751 07:06:34 -- common/autotest_common.sh@10 -- # set +x 00:09:52.751 ************************************ 00:09:52.751 START TEST bdev_raid 00:09:52.751 ************************************ 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:52.751 * Looking for test storage... 00:09:52.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.751 07:06:34 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.751 --rc genhtml_branch_coverage=1 00:09:52.751 --rc genhtml_function_coverage=1 00:09:52.751 --rc genhtml_legend=1 00:09:52.751 --rc geninfo_all_blocks=1 00:09:52.751 --rc geninfo_unexecuted_blocks=1 00:09:52.751 00:09:52.751 ' 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.751 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:52.751 --rc genhtml_branch_coverage=1 00:09:52.751 --rc genhtml_function_coverage=1 00:09:52.751 --rc genhtml_legend=1 00:09:52.751 --rc geninfo_all_blocks=1 00:09:52.751 --rc geninfo_unexecuted_blocks=1 00:09:52.751 00:09:52.751 ' 00:09:52.751 07:06:34 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.751 --rc genhtml_branch_coverage=1 00:09:52.751 --rc genhtml_function_coverage=1 00:09:52.751 --rc genhtml_legend=1 00:09:52.751 --rc geninfo_all_blocks=1 00:09:52.751 --rc geninfo_unexecuted_blocks=1 00:09:52.751 00:09:52.751 ' 00:09:52.752 07:06:34 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.752 --rc genhtml_branch_coverage=1 00:09:52.752 --rc genhtml_function_coverage=1 00:09:52.752 --rc genhtml_legend=1 00:09:52.752 --rc geninfo_all_blocks=1 00:09:52.752 --rc geninfo_unexecuted_blocks=1 00:09:52.752 00:09:52.752 ' 00:09:52.752 07:06:34 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:52.752 07:06:34 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:52.752 07:06:34 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:52.752 07:06:35 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:52.752 07:06:35 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:52.752 07:06:35 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:52.752 07:06:35 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:52.752 07:06:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.752 07:06:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.752 07:06:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.011 ************************************ 
00:09:53.011 START TEST raid1_resize_data_offset_test 00:09:53.011 ************************************ 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60275 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60275' 00:09:53.011 Process raid pid: 60275 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60275 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60275 ']' 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.011 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.011 [2024-11-20 07:06:35.132057] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:53.011 [2024-11-20 07:06:35.132520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:53.270 [2024-11-20 07:06:35.326344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:53.270 [2024-11-20 07:06:35.475492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.530 [2024-11-20 07:06:35.729434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:53.530 [2024-11-20 07:06:35.729517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:53.788 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:53.788 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0
00:09:53.788 07:06:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:09:53.788 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.788 07:06:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.048 malloc0
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.048 malloc1
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.048 null0
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.048 [2024-11-20 07:06:36.207070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:09:54.048 [2024-11-20 07:06:36.209252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:54.048 [2024-11-20 07:06:36.209300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:09:54.048 [2024-11-20 07:06:36.209499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:54.048 [2024-11-20 07:06:36.209515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:09:54.048 [2024-11-20 07:06:36.209802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:54.048 [2024-11-20 07:06:36.210006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:54.048 [2024-11-20 07:06:36.210021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:09:54.048 [2024-11-20 07:06:36.210187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.048 [2024-11-20 07:06:36.266999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.048 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.985 malloc2
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.985 [2024-11-20 07:06:36.924857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:54.985 [2024-11-20 07:06:36.944011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.985 [2024-11-20 07:06:36.946193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60275
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60275 ']'
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60275
00:09:54.985 07:06:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:09:54.985 07:06:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:54.985 07:06:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60275
killing process with pid 60275
07:06:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:54.986 07:06:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:54.986 07:06:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60275'
00:09:54.986 07:06:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60275
00:09:54.986 07:06:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60275
00:09:54.986 [2024-11-20 07:06:37.035384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:54.986 [2024-11-20 07:06:37.035768] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:09:54.986 [2024-11-20 07:06:37.035839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:54.986 [2024-11-20 07:06:37.035858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:09:54.986 [2024-11-20 07:06:37.078845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:54.986 [2024-11-20 07:06:37.079230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:54.986 [2024-11-20 07:06:37.079249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:09:56.889 [2024-11-20 07:06:39.149672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:58.267 ************************************
00:09:58.267 END TEST raid1_resize_data_offset_test
00:09:58.267 ************************************
00:09:58.267 07:06:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:09:58.267
00:09:58.267 real 0m5.452s
00:09:58.267 user 0m5.143s
00:09:58.267 sys 0m0.776s
00:09:58.267 07:06:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:58.267 07:06:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.267 07:06:40 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:09:58.267 07:06:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:58.267 07:06:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:58.267 07:06:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:58.525 ************************************
00:09:58.525 START TEST raid0_resize_superblock_test
************************************
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60364
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60364'
Process raid pid: 60364
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60364
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60364 ']'
00:09:58.525 07:06:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:58.526 07:06:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:58.526 07:06:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:58.526 07:06:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:58.526 07:06:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.526 [2024-11-20 07:06:40.645478] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:09:58.526 [2024-11-20 07:06:40.645666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:58.784 [2024-11-20 07:06:40.848229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.784 [2024-11-20 07:06:40.999961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.043 [2024-11-20 07:06:41.254644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:59.043 [2024-11-20 07:06:41.254710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:59.302 07:06:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.302 07:06:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:59.561 07:06:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:09:59.561 07:06:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.561 07:06:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.158 malloc0
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.158 [2024-11-20 07:06:42.244321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:10:00.158 [2024-11-20 07:06:42.244434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:00.158 [2024-11-20 07:06:42.244470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:00.158 [2024-11-20 07:06:42.244484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:00.158 [2024-11-20 07:06:42.247201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:00.158 [2024-11-20 07:06:42.247243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.158 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 55cd32cf-1390-49b8-88f9-86fbd1f1379a
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 649ba3e2-df28-492c-8b2f-555d338946da
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 f3cde864-d8d8-46b7-8586-b7e2a2942d4c
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 [2024-11-20 07:06:42.460937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 649ba3e2-df28-492c-8b2f-555d338946da is claimed
00:10:00.419 [2024-11-20 07:06:42.461066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f3cde864-d8d8-46b7-8586-b7e2a2942d4c is claimed
00:10:00.419 [2024-11-20 07:06:42.461206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:00.419 [2024-11-20 07:06:42.461224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:10:00.419 [2024-11-20 07:06:42.461620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:00.419 [2024-11-20 07:06:42.461850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:00.419 [2024-11-20 07:06:42.461871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:10:00.419 [2024-11-20 07:06:42.462053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:10:00.419 [2024-11-20 07:06:42.580987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 [2024-11-20 07:06:42.628997] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:10:00.419 [2024-11-20 07:06:42.629047] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '649ba3e2-df28-492c-8b2f-555d338946da' was resized: old size 131072, new size 204800
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 [2024-11-20 07:06:42.640824] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:10:00.419 [2024-11-20 07:06:42.640861] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f3cde864-d8d8-46b7-8586-b7e2a2942d4c' was resized: old size 131072, new size 204800
00:10:00.419 [2024-11-20 07:06:42.640886] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.419 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.678 [2024-11-20 07:06:42.745047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.678 [2024-11-20 07:06:42.788469] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:10:00.678 [2024-11-20 07:06:42.788570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:10:00.678 [2024-11-20 07:06:42.788596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:00.678 [2024-11-20 07:06:42.788617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:10:00.678 [2024-11-20 07:06:42.788761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:00.678 [2024-11-20 07:06:42.788800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:00.678 [2024-11-20 07:06:42.788814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.678 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.678 [2024-11-20 07:06:42.800300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:10:00.678 [2024-11-20 07:06:42.800398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:00.678 [2024-11-20 07:06:42.800427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:10:00.678 [2024-11-20 07:06:42.800440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:00.678 [2024-11-20 07:06:42.803097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:00.678 [2024-11-20 07:06:42.803140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:10:00.678 [2024-11-20 07:06:42.805111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 649ba3e2-df28-492c-8b2f-555d338946da
00:10:00.678 [2024-11-20 07:06:42.805204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 649ba3e2-df28-492c-8b2f-555d338946da is claimed
00:10:00.678 [2024-11-20 07:06:42.805377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f3cde864-d8d8-46b7-8586-b7e2a2942d4c
00:10:00.678 [2024-11-20 07:06:42.805405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f3cde864-d8d8-46b7-8586-b7e2a2942d4c is claimed
00:10:00.678 [2024-11-20 07:06:42.805621] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f3cde864-d8d8-46b7-8586-b7e2a2942d4c (2) smaller than existing raid bdev Raid (3)
00:10:00.678 [2024-11-20 07:06:42.805659] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 649ba3e2-df28-492c-8b2f-555d338946da: File exists
00:10:00.678 [2024-11-20 07:06:42.805700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:10:00.678 [2024-11-20 07:06:42.805713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
pt0
[2024-11-20 07:06:42.805997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-20 07:06:42.806182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-20 07:06:42.806192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-20 07:06:42.806419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.679 [2024-11-20 07:06:42.829411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60364
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60364 ']'
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60364
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60364
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:00.679 07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60364
07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60364'
07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60364
[2024-11-20 07:06:42.915627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-20 07:06:42.915746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
07:06:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60364
[2024-11-20 07:06:42.915797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 07:06:42.915808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:10:02.581 [2024-11-20 07:06:44.529109] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:03.962 07:06:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:10:03.962
00:10:03.962 real 0m5.262s
00:10:03.962 user 0m5.318s
00:10:03.962 sys 0m0.829s
00:10:03.962 07:06:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:03.962 ************************************
00:10:03.962 END TEST raid0_resize_superblock_test
00:10:03.962 07:06:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.962 ************************************
00:10:03.962 07:06:45 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:10:03.962 07:06:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:03.962 07:06:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:03.962 07:06:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:03.962 ************************************
00:10:03.962 START TEST raid1_resize_superblock_test
00:10:03.962 ************************************
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60468
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:03.962 Process raid pid: 60468
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60468'
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60468
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60468 ']'
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:03.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:03.962 07:06:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.962 [2024-11-20 07:06:45.974061] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:10:03.962 [2024-11-20 07:06:45.974220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:03.962 [2024-11-20 07:06:46.157618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:04.221 [2024-11-20 07:06:46.290373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:04.481 [2024-11-20 07:06:46.526983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:04.481 [2024-11-20 07:06:46.527033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:04.741 07:06:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:04.741 07:06:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:10:04.741 07:06:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:10:04.741 07:06:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.741 07:06:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.310 malloc0
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.310 [2024-11-20 07:06:47.406687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:10:05.310 [2024-11-20 07:06:47.406793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:05.310 [2024-11-20 07:06:47.406827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:05.310 [2024-11-20 07:06:47.406850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:05.310 [2024-11-20 07:06:47.409724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:05.310 [2024-11-20 07:06:47.409778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.310 fb70d7f4-395a-4288-96af-28bef3b25fea
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:10:05.310 07:06:47
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.310 7ceb219e-9730-40e2-904c-e797a4a3e081 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.310 5eac984c-a5f7-470b-a157-7fb1fcf4494e 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.310 [2024-11-20 07:06:47.544577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7ceb219e-9730-40e2-904c-e797a4a3e081 is claimed 00:10:05.310 [2024-11-20 07:06:47.544697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5eac984c-a5f7-470b-a157-7fb1fcf4494e is claimed 00:10:05.310 [2024-11-20 07:06:47.544844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:05.310 [2024-11-20 07:06:47.544862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:10:05.310 [2024-11-20 07:06:47.545149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:05.310 [2024-11-20 07:06:47.545384] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:05.310 [2024-11-20 07:06:47.545404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:05.310 [2024-11-20 07:06:47.545583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:05.310 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.311 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:05.311 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:10:05.570 [2024-11-20 07:06:47.660621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.570 [2024-11-20 07:06:47.708544] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:05.570 [2024-11-20 07:06:47.708582] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7ceb219e-9730-40e2-904c-e797a4a3e081' was resized: old size 131072, new size 204800 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:05.570 07:06:47 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.570 [2024-11-20 07:06:47.720425] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:05.570 [2024-11-20 07:06:47.720458] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5eac984c-a5f7-470b-a157-7fb1fcf4494e' was resized: old size 131072, new size 204800 00:10:05.570 [2024-11-20 07:06:47.720483] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:05.570 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.571 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.571 [2024-11-20 07:06:47.820376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.831 [2024-11-20 07:06:47.848109] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:10:05.831 [2024-11-20 07:06:47.848211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:10:05.831 [2024-11-20 07:06:47.848241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:05.831 [2024-11-20 07:06:47.848417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.831 [2024-11-20 07:06:47.848633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.831 [2024-11-20 07:06:47.848707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.831 [2024-11-20 07:06:47.848721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.831 [2024-11-20 07:06:47.859965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:05.831 [2024-11-20 07:06:47.860030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.831 [2024-11-20 07:06:47.860055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:05.831 [2024-11-20 07:06:47.860069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.831 [2024-11-20 07:06:47.862557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.831 [2024-11-20 07:06:47.862599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:05.831 [2024-11-20 07:06:47.864729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
7ceb219e-9730-40e2-904c-e797a4a3e081 00:10:05.831 [2024-11-20 07:06:47.864839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7ceb219e-9730-40e2-904c-e797a4a3e081 is claimed 00:10:05.831 [2024-11-20 07:06:47.865016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5eac984c-a5f7-470b-a157-7fb1fcf4494e 00:10:05.831 [2024-11-20 07:06:47.865058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5eac984c-a5f7-470b-a157-7fb1fcf4494e is claimed 00:10:05.831 [2024-11-20 07:06:47.865291] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 5eac984c-a5f7-470b-a157-7fb1fcf4494e (2) smaller than existing raid bdev Raid (3) 00:10:05.831 [2024-11-20 07:06:47.865354] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7ceb219e-9730-40e2-904c-e797a4a3e081: File exists 00:10:05.831 [2024-11-20 07:06:47.865417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:05.831 [2024-11-20 07:06:47.865436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:05.831 pt0 00:10:05.831 [2024-11-20 07:06:47.865727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:05.831 [2024-11-20 07:06:47.865904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:05.831 [2024-11-20 07:06:47.865921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:05.831 [2024-11-20 07:06:47.866091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:10:05.831 [2024-11-20 07:06:47.888968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60468 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60468 ']' 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60468 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60468 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.831 killing process with pid 60468 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60468' 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60468 00:10:05.831 [2024-11-20 07:06:47.972399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.831 [2024-11-20 07:06:47.972509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.831 07:06:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60468 00:10:05.831 [2024-11-20 07:06:47.972599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.831 [2024-11-20 07:06:47.972611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:07.745 [2024-11-20 07:06:49.602103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.756 07:06:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:08.756 00:10:08.756 real 0m4.954s 00:10:08.756 user 0m5.109s 00:10:08.756 sys 0m0.617s 00:10:08.756 07:06:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.756 07:06:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.756 ************************************ 00:10:08.756 END TEST raid1_resize_superblock_test 00:10:08.756 
************************************ 00:10:08.756 07:06:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:10:08.756 07:06:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:10:08.756 07:06:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:10:08.756 07:06:50 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:10:08.756 07:06:50 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:10:08.757 07:06:50 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:10:08.757 07:06:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.757 07:06:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.757 07:06:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.757 ************************************ 00:10:08.757 START TEST raid_function_test_raid0 00:10:08.757 ************************************ 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60571 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60571' 00:10:08.757 Process raid pid: 60571 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60571 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 
60571 ']' 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.757 07:06:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:09.016 [2024-11-20 07:06:51.022039] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:09.016 [2024-11-20 07:06:51.022190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.016 [2024-11-20 07:06:51.210398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.275 [2024-11-20 07:06:51.344188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.534 [2024-11-20 07:06:51.578140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.534 [2024-11-20 07:06:51.578191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:09.793 07:06:51 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:09.793 Base_1 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:09.793 Base_2 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:09.793 [2024-11-20 07:06:51.993097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:09.793 [2024-11-20 07:06:51.995237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:09.793 [2024-11-20 07:06:51.995333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:09.793 [2024-11-20 07:06:51.995359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:09.793 [2024-11-20 07:06:51.995654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:09.793 [2024-11-20 07:06:51.995832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:09.793 [2024-11-20 07:06:51.995849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:09.793 [2024-11-20 07:06:51.996030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.793 07:06:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:10:09.793 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:10.051 [2024-11-20 07:06:52.268739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:10.051 /dev/nbd0 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:10.310 1+0 records in 00:10:10.310 1+0 records out 00:10:10.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439631 s, 9.3 MB/s 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 
-- # size=4096 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:10.310 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:10.569 { 00:10:10.569 "nbd_device": "/dev/nbd0", 00:10:10.569 "bdev_name": "raid" 00:10:10.569 } 00:10:10.569 ]' 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:10.569 { 00:10:10.569 "nbd_device": "/dev/nbd0", 00:10:10.569 "bdev_name": "raid" 00:10:10.569 } 00:10:10.569 ]' 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:10:10.569 07:06:52 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:10:10.569 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:10.570 
07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:10.570 4096+0 records in 00:10:10.570 4096+0 records out 00:10:10.570 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0329879 s, 63.6 MB/s 00:10:10.570 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:10.829 4096+0 records in 00:10:10.829 4096+0 records out 00:10:10.829 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.238518 s, 8.8 MB/s 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:10.829 128+0 records in 00:10:10.829 128+0 records out 00:10:10.829 65536 bytes (66 kB, 64 KiB) copied, 0.00125088 s, 52.4 MB/s 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:10.829 07:06:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:10.829 2035+0 records in 00:10:10.829 2035+0 records out 00:10:10.829 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0133798 s, 77.9 MB/s 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:10.829 456+0 records in 00:10:10.829 456+0 records out 00:10:10.829 233472 bytes (233 kB, 228 KiB) copied, 0.00403696 s, 57.8 MB/s 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:10.829 07:06:53 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.829 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:11.088 [2024-11-20 07:06:53.300837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:11.088 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:11.346 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:11.346 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:11.346 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:11.604 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:11.604 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:11.604 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60571 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60571 ']' 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 60571 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60571 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.605 killing process with pid 60571 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60571' 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60571 00:10:11.605 [2024-11-20 07:06:53.675510] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.605 [2024-11-20 07:06:53.675626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.605 07:06:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60571 00:10:11.605 [2024-11-20 07:06:53.675684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.605 [2024-11-20 07:06:53.675702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:11.864 [2024-11-20 07:06:53.894993] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.238 07:06:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:10:13.238 00:10:13.238 real 0m4.166s 00:10:13.238 user 0m4.857s 00:10:13.238 sys 0m1.080s 00:10:13.238 07:06:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.238 07:06:55 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:10:13.238 ************************************ 00:10:13.238 END TEST raid_function_test_raid0 00:10:13.238 ************************************ 00:10:13.238 07:06:55 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:10:13.238 07:06:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.238 07:06:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.238 07:06:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.238 ************************************ 00:10:13.238 START TEST raid_function_test_concat 00:10:13.238 ************************************ 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60700 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.238 Process raid pid: 60700 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60700' 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60700 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60700 ']' 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.238 07:06:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:13.238 [2024-11-20 07:06:55.244365] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:13.238 [2024-11-20 07:06:55.244477] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.238 [2024-11-20 07:06:55.418890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.496 [2024-11-20 07:06:55.537405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.496 [2024-11-20 07:06:55.749855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.496 [2024-11-20 07:06:55.749910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:14.066 Base_1 
00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:14.066 Base_2 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:14.066 [2024-11-20 07:06:56.204979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:14.066 [2024-11-20 07:06:56.206706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:14.066 [2024-11-20 07:06:56.206774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:14.066 [2024-11-20 07:06:56.206785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:14.066 [2024-11-20 07:06:56.207020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:14.066 [2024-11-20 07:06:56.207173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:14.066 [2024-11-20 07:06:56.207188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:14.066 [2024-11-20 07:06:56.207321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.066 07:06:56 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:14.066 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:10:14.326 [2024-11-20 07:06:56.440644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:14.326 /dev/nbd0 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:14.326 1+0 records in 00:10:14.326 1+0 records out 00:10:14.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423032 s, 9.7 MB/s 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:14.326 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:14.587 { 00:10:14.587 "nbd_device": "/dev/nbd0", 00:10:14.587 "bdev_name": "raid" 00:10:14.587 } 00:10:14.587 ]' 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:14.587 { 00:10:14.587 "nbd_device": "/dev/nbd0", 00:10:14.587 "bdev_name": "raid" 00:10:14.587 } 00:10:14.587 ]' 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:10:14.587 07:06:56 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:14.587 4096+0 records in 00:10:14.587 4096+0 records out 00:10:14.587 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0346534 s, 60.5 MB/s 00:10:14.587 07:06:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:14.847 4096+0 records in 00:10:14.847 4096+0 records out 00:10:14.847 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.212022 s, 9.9 MB/s 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:14.847 128+0 records in 00:10:14.847 128+0 records out 00:10:14.847 65536 bytes (66 kB, 64 KiB) copied, 0.00113216 s, 57.9 MB/s 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:14.847 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:15.107 2035+0 records in 00:10:15.107 2035+0 records out 00:10:15.107 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0144607 s, 72.1 MB/s 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:15.107 456+0 records in 00:10:15.107 456+0 records out 00:10:15.107 233472 bytes (233 kB, 228 KiB) copied, 0.00364167 s, 64.1 MB/s 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:15.107 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:15.367 [2024-11-20 07:06:57.429106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:15.367 07:06:57 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:15.367 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60700 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60700 ']' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60700 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60700 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.626 killing process with pid 60700 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60700' 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60700 00:10:15.626 [2024-11-20 07:06:57.778754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.626 [2024-11-20 07:06:57.778871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.626 [2024-11-20 07:06:57.778925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.626 [2024-11-20 07:06:57.778939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:15.626 07:06:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60700 00:10:15.885 [2024-11-20 07:06:57.988507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.263 07:06:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:10:17.263 00:10:17.263 real 0m3.982s 00:10:17.263 user 0m4.631s 00:10:17.263 sys 0m1.032s 00:10:17.263 07:06:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.263 07:06:59 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.263 ************************************ 00:10:17.263 END TEST raid_function_test_concat 00:10:17.263 ************************************ 00:10:17.263 07:06:59 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:10:17.263 07:06:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.263 07:06:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.263 07:06:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.263 ************************************ 00:10:17.263 START TEST raid0_resize_test 00:10:17.263 ************************************ 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60829 00:10:17.263 Process raid pid: 60829 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60829' 
00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60829 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60829 ']' 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.263 07:06:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.263 [2024-11-20 07:06:59.312414] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:17.264 [2024-11-20 07:06:59.312550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.264 [2024-11-20 07:06:59.475734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.523 [2024-11-20 07:06:59.629411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.782 [2024-11-20 07:06:59.879987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.782 [2024-11-20 07:06:59.880055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.042 Base_1 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.042 Base_2 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.042 [2024-11-20 07:07:00.251844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:18.042 [2024-11-20 07:07:00.253988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:18.042 [2024-11-20 07:07:00.254057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:18.042 [2024-11-20 07:07:00.254070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:18.042 [2024-11-20 07:07:00.254361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:18.042 [2024-11-20 07:07:00.254511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:18.042 [2024-11-20 07:07:00.254525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:18.042 [2024-11-20 07:07:00.254705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.042 [2024-11-20 07:07:00.263783] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:18.042 [2024-11-20 07:07:00.263814] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:18.042 true 
00:10:18.042 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.043 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:18.043 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:18.043 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.043 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.043 [2024-11-20 07:07:00.279945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.043 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.305 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.306 [2024-11-20 07:07:00.323762] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:18.306 [2024-11-20 07:07:00.323821] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:18.306 [2024-11-20 07:07:00.323854] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:18.306 true 
00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.306 [2024-11-20 07:07:00.340066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60829 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60829 ']' 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60829 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60829 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.306 07:07:00 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.306 killing process with pid 60829 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60829' 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60829 00:10:18.306 [2024-11-20 07:07:00.418278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.306 07:07:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60829 00:10:18.306 [2024-11-20 07:07:00.418502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.306 [2024-11-20 07:07:00.418618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.306 [2024-11-20 07:07:00.418648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:18.306 [2024-11-20 07:07:00.444136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.694 07:07:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:19.694 00:10:19.694 real 0m2.378s 00:10:19.694 user 0m2.511s 00:10:19.694 sys 0m0.432s 00:10:19.694 07:07:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.694 07:07:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.694 ************************************ 00:10:19.694 END TEST raid0_resize_test 00:10:19.694 ************************************ 00:10:19.694 07:07:01 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:10:19.694 07:07:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.694 07:07:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.694 07:07:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.694 
************************************ 00:10:19.694 START TEST raid1_resize_test 00:10:19.694 ************************************ 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60891 00:10:19.694 Process raid pid: 60891 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60891' 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60891 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60891 ']' 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:19.694 07:07:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.695 07:07:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.695 07:07:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.695 [2024-11-20 07:07:01.765181] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:19.695 [2024-11-20 07:07:01.765352] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.695 [2024-11-20 07:07:01.950362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.956 [2024-11-20 07:07:02.068447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.215 [2024-11-20 07:07:02.276375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.215 [2024-11-20 07:07:02.276421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.476 Base_1 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:20.476 
07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.476 Base_2 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.476 [2024-11-20 07:07:02.643627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:20.476 [2024-11-20 07:07:02.645527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:20.476 [2024-11-20 07:07:02.645606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:20.476 [2024-11-20 07:07:02.645627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:20.476 [2024-11-20 07:07:02.645921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:20.476 [2024-11-20 07:07:02.646094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.476 [2024-11-20 07:07:02.646112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:20.476 [2024-11-20 07:07:02.646306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:20.476 07:07:02 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.476 [2024-11-20 07:07:02.655622] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:20.476 [2024-11-20 07:07:02.655673] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:20.476 true 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.476 [2024-11-20 07:07:02.671751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:20.476 [2024-11-20 07:07:02.719522] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:20.476 [2024-11-20 07:07:02.719563] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:20.476 [2024-11-20 07:07:02.719593] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:10:20.476 true 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:20.476 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.476 [2024-11-20 07:07:02.735711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.736 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.736 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:10:20.736 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:10:20.736 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:10:20.736 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60891 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60891 ']' 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60891 00:10:20.737 
07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60891 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60891' 00:10:20.737 killing process with pid 60891 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60891 00:10:20.737 [2024-11-20 07:07:02.819780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.737 [2024-11-20 07:07:02.819890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.737 07:07:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60891 00:10:20.737 [2024-11-20 07:07:02.820409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.737 [2024-11-20 07:07:02.820435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:20.737 [2024-11-20 07:07:02.838229] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.115 07:07:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:22.115 00:10:22.115 real 0m2.360s 00:10:22.115 user 0m2.495s 00:10:22.115 sys 0m0.379s 00:10:22.115 07:07:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.115 07:07:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.115 ************************************ 00:10:22.115 END TEST raid1_resize_test 
00:10:22.115 ************************************ 00:10:22.115 07:07:04 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:22.115 07:07:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:22.115 07:07:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:22.115 07:07:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:22.115 07:07:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.115 07:07:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.115 ************************************ 00:10:22.115 START TEST raid_state_function_test 00:10:22.115 ************************************ 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:22.115 07:07:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:22.115 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60953 00:10:22.116 Process raid pid: 60953 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60953' 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60953 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60953 ']' 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.116 07:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.116 [2024-11-20 07:07:04.189608] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:22.116 [2024-11-20 07:07:04.189769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.116 [2024-11-20 07:07:04.367237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.375 [2024-11-20 07:07:04.489657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.636 [2024-11-20 07:07:04.711868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.636 [2024-11-20 07:07:04.711930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.895 [2024-11-20 07:07:05.074068] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.895 [2024-11-20 07:07:05.074122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.895 [2024-11-20 07:07:05.074134] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.895 [2024-11-20 07:07:05.074144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.895 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.896 07:07:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.896 "name": "Existed_Raid", 00:10:22.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.896 "strip_size_kb": 64, 00:10:22.896 "state": "configuring", 00:10:22.896 "raid_level": "raid0", 00:10:22.896 "superblock": false, 00:10:22.896 "num_base_bdevs": 2, 00:10:22.896 "num_base_bdevs_discovered": 0, 00:10:22.896 "num_base_bdevs_operational": 2, 00:10:22.896 "base_bdevs_list": [ 00:10:22.896 { 00:10:22.896 "name": "BaseBdev1", 00:10:22.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.896 "is_configured": false, 00:10:22.896 "data_offset": 0, 00:10:22.896 "data_size": 0 00:10:22.896 }, 00:10:22.896 { 00:10:22.896 "name": "BaseBdev2", 00:10:22.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.896 "is_configured": false, 00:10:22.896 "data_offset": 0, 00:10:22.896 "data_size": 0 00:10:22.896 } 00:10:22.896 ] 00:10:22.896 }' 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.896 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.464 07:07:05 
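The `jq -r '.[] | select(.name == "Existed_Raid")'` filter above reduces the `bdev_raid_get_bdevs all` output to a single object, and `verify_raid_bdev_state` then asserts on its fields. At this point in the trace the raid was created before either base bdev exists, so it sits in `configuring` with nothing discovered. A minimal Python sketch of that check (the JSON literal is trimmed from the trace above to the fields being asserted):

```python
import json

# raid_bdev_info as reported by `rpc_cmd bdev_raid_get_bdevs all` before
# either base bdev exists (copied from the trace above, trimmed to the
# fields verify_raid_bdev_state cares about).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": false,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2
}
""")

# Mirror of the assertions made around bdev_raid.sh@113: the raid bdev
# stays in "configuring" until every base bdev slot is discovered.
assert raid_bdev_info["state"] == "configuring"
assert raid_bdev_info["num_base_bdevs_discovered"] == 0
assert raid_bdev_info["num_base_bdevs_operational"] == 2
```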
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.464 [2024-11-20 07:07:05.517279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.464 [2024-11-20 07:07:05.517322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.464 [2024-11-20 07:07:05.525234] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.464 [2024-11-20 07:07:05.525273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.464 [2024-11-20 07:07:05.525298] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.464 [2024-11-20 07:07:05.525311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.464 [2024-11-20 07:07:05.573204] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.464 BaseBdev1 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.464 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.464 [ 00:10:23.464 { 00:10:23.464 "name": "BaseBdev1", 00:10:23.464 "aliases": [ 00:10:23.464 "ebf1d62b-1507-4c79-b90a-9f80175ef51b" 00:10:23.464 ], 00:10:23.464 "product_name": "Malloc disk", 00:10:23.465 "block_size": 512, 00:10:23.465 "num_blocks": 65536, 00:10:23.465 "uuid": 
"ebf1d62b-1507-4c79-b90a-9f80175ef51b", 00:10:23.465 "assigned_rate_limits": { 00:10:23.465 "rw_ios_per_sec": 0, 00:10:23.465 "rw_mbytes_per_sec": 0, 00:10:23.465 "r_mbytes_per_sec": 0, 00:10:23.465 "w_mbytes_per_sec": 0 00:10:23.465 }, 00:10:23.465 "claimed": true, 00:10:23.465 "claim_type": "exclusive_write", 00:10:23.465 "zoned": false, 00:10:23.465 "supported_io_types": { 00:10:23.465 "read": true, 00:10:23.465 "write": true, 00:10:23.465 "unmap": true, 00:10:23.465 "flush": true, 00:10:23.465 "reset": true, 00:10:23.465 "nvme_admin": false, 00:10:23.465 "nvme_io": false, 00:10:23.465 "nvme_io_md": false, 00:10:23.465 "write_zeroes": true, 00:10:23.465 "zcopy": true, 00:10:23.465 "get_zone_info": false, 00:10:23.465 "zone_management": false, 00:10:23.465 "zone_append": false, 00:10:23.465 "compare": false, 00:10:23.465 "compare_and_write": false, 00:10:23.465 "abort": true, 00:10:23.465 "seek_hole": false, 00:10:23.465 "seek_data": false, 00:10:23.465 "copy": true, 00:10:23.465 "nvme_iov_md": false 00:10:23.465 }, 00:10:23.465 "memory_domains": [ 00:10:23.465 { 00:10:23.465 "dma_device_id": "system", 00:10:23.465 "dma_device_type": 1 00:10:23.465 }, 00:10:23.465 { 00:10:23.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.465 "dma_device_type": 2 00:10:23.465 } 00:10:23.465 ], 00:10:23.465 "driver_specific": {} 00:10:23.465 } 00:10:23.465 ] 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.465 07:07:05 
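The BaseBdev1 descriptor printed above was produced by `bdev_malloc_create 32 512 -b BaseBdev1`: a 32 MiB malloc disk with a 512-byte block size. Its reported `num_blocks` follows directly from those two arguments; a quick sketch of the arithmetic:

```python
# bdev_malloc_create takes the size in MiB and the block size in bytes.
size_mib = 32
block_size = 512

# 32 MiB / 512 B per block = 65536 blocks, matching the descriptor above.
num_blocks = size_mib * 1024 * 1024 // block_size
assert num_blocks == 65536
```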
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.465 "name": "Existed_Raid", 00:10:23.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.465 "strip_size_kb": 64, 00:10:23.465 "state": "configuring", 00:10:23.465 "raid_level": "raid0", 00:10:23.465 "superblock": false, 00:10:23.465 "num_base_bdevs": 2, 00:10:23.465 "num_base_bdevs_discovered": 1, 00:10:23.465 "num_base_bdevs_operational": 2, 00:10:23.465 "base_bdevs_list": [ 00:10:23.465 { 00:10:23.465 "name": "BaseBdev1", 00:10:23.465 "uuid": "ebf1d62b-1507-4c79-b90a-9f80175ef51b", 00:10:23.465 "is_configured": true, 00:10:23.465 "data_offset": 0, 
00:10:23.465 "data_size": 65536 00:10:23.465 }, 00:10:23.465 { 00:10:23.465 "name": "BaseBdev2", 00:10:23.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.465 "is_configured": false, 00:10:23.465 "data_offset": 0, 00:10:23.465 "data_size": 0 00:10:23.465 } 00:10:23.465 ] 00:10:23.465 }' 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.465 07:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.034 [2024-11-20 07:07:06.092405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.034 [2024-11-20 07:07:06.092468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.034 [2024-11-20 07:07:06.100418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.034 [2024-11-20 07:07:06.102542] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.034 [2024-11-20 07:07:06.102586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
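After BaseBdev1 is claimed, the trace above shows `num_base_bdevs_discovered` move from 0 to 1 while the state remains `configuring`. The rule being exercised can be sketched as a tiny helper; note the helper name is illustrative, not a function from the script (the real check lives in `verify_raid_bdev_state` in bdev_raid.sh):

```python
# Sketch of the state rule bdev_raid applies while base bdevs are attached:
# the raid bdev leaves "configuring" only once every slot is discovered.
def expected_state(num_base_bdevs: int, num_discovered: int) -> str:
    return "online" if num_discovered == num_base_bdevs else "configuring"

# After BaseBdev1 is claimed (trace above): 1 of 2 discovered.
assert expected_state(2, 1) == "configuring"
# Once BaseBdev2 is claimed as well, the raid transitions to online.
assert expected_state(2, 2) == "online"
```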
00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.034 "name": "Existed_Raid", 00:10:24.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.034 "strip_size_kb": 64, 00:10:24.034 "state": "configuring", 00:10:24.034 "raid_level": "raid0", 00:10:24.034 "superblock": false, 00:10:24.034 "num_base_bdevs": 2, 00:10:24.034 "num_base_bdevs_discovered": 1, 00:10:24.034 "num_base_bdevs_operational": 2, 00:10:24.034 "base_bdevs_list": [ 00:10:24.034 { 00:10:24.034 "name": "BaseBdev1", 00:10:24.034 "uuid": "ebf1d62b-1507-4c79-b90a-9f80175ef51b", 00:10:24.034 "is_configured": true, 00:10:24.034 "data_offset": 0, 00:10:24.034 "data_size": 65536 00:10:24.034 }, 00:10:24.034 { 00:10:24.034 "name": "BaseBdev2", 00:10:24.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.034 "is_configured": false, 00:10:24.034 "data_offset": 0, 00:10:24.034 "data_size": 0 00:10:24.034 } 00:10:24.034 ] 00:10:24.034 }' 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.034 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.604 [2024-11-20 07:07:06.605819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.604 [2024-11-20 07:07:06.605873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.604 [2024-11-20 07:07:06.605885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:24.604 [2024-11-20 07:07:06.606181] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:24.604 [2024-11-20 07:07:06.606398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.604 [2024-11-20 07:07:06.606425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:24.604 [2024-11-20 07:07:06.606726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.604 BaseBdev2 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.604 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.605 07:07:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.605 [ 00:10:24.605 { 00:10:24.605 "name": "BaseBdev2", 00:10:24.605 "aliases": [ 00:10:24.605 "5b422636-26b5-4e1a-8edc-ee33db71d373" 00:10:24.605 ], 00:10:24.605 "product_name": "Malloc disk", 00:10:24.605 "block_size": 512, 00:10:24.605 "num_blocks": 65536, 00:10:24.605 "uuid": "5b422636-26b5-4e1a-8edc-ee33db71d373", 00:10:24.605 "assigned_rate_limits": { 00:10:24.605 "rw_ios_per_sec": 0, 00:10:24.605 "rw_mbytes_per_sec": 0, 00:10:24.605 "r_mbytes_per_sec": 0, 00:10:24.605 "w_mbytes_per_sec": 0 00:10:24.605 }, 00:10:24.605 "claimed": true, 00:10:24.605 "claim_type": "exclusive_write", 00:10:24.605 "zoned": false, 00:10:24.605 "supported_io_types": { 00:10:24.605 "read": true, 00:10:24.605 "write": true, 00:10:24.605 "unmap": true, 00:10:24.605 "flush": true, 00:10:24.605 "reset": true, 00:10:24.605 "nvme_admin": false, 00:10:24.605 "nvme_io": false, 00:10:24.605 "nvme_io_md": false, 00:10:24.605 "write_zeroes": true, 00:10:24.605 "zcopy": true, 00:10:24.605 "get_zone_info": false, 00:10:24.605 "zone_management": false, 00:10:24.605 "zone_append": false, 00:10:24.605 "compare": false, 00:10:24.605 "compare_and_write": false, 00:10:24.605 "abort": true, 00:10:24.605 "seek_hole": false, 00:10:24.605 "seek_data": false, 00:10:24.605 "copy": true, 00:10:24.605 "nvme_iov_md": false 00:10:24.605 }, 00:10:24.605 "memory_domains": [ 00:10:24.605 { 00:10:24.605 "dma_device_id": "system", 00:10:24.605 "dma_device_type": 1 00:10:24.605 }, 00:10:24.605 { 00:10:24.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.605 "dma_device_type": 2 00:10:24.605 } 00:10:24.605 ], 00:10:24.605 "driver_specific": {} 00:10:24.605 } 00:10:24.605 ] 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:24.605 07:07:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:24.605 "name": "Existed_Raid", 00:10:24.605 "uuid": "3cc6e3fb-de5f-4262-85a4-80b594aaafaa", 00:10:24.605 "strip_size_kb": 64, 00:10:24.605 "state": "online", 00:10:24.605 "raid_level": "raid0", 00:10:24.605 "superblock": false, 00:10:24.605 "num_base_bdevs": 2, 00:10:24.605 "num_base_bdevs_discovered": 2, 00:10:24.605 "num_base_bdevs_operational": 2, 00:10:24.605 "base_bdevs_list": [ 00:10:24.605 { 00:10:24.605 "name": "BaseBdev1", 00:10:24.605 "uuid": "ebf1d62b-1507-4c79-b90a-9f80175ef51b", 00:10:24.605 "is_configured": true, 00:10:24.605 "data_offset": 0, 00:10:24.605 "data_size": 65536 00:10:24.605 }, 00:10:24.605 { 00:10:24.605 "name": "BaseBdev2", 00:10:24.605 "uuid": "5b422636-26b5-4e1a-8edc-ee33db71d373", 00:10:24.605 "is_configured": true, 00:10:24.605 "data_offset": 0, 00:10:24.605 "data_size": 65536 00:10:24.605 } 00:10:24.605 ] 00:10:24.605 }' 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.605 07:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.866 [2024-11-20 07:07:07.117395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.866 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.126 "name": "Existed_Raid", 00:10:25.126 "aliases": [ 00:10:25.126 "3cc6e3fb-de5f-4262-85a4-80b594aaafaa" 00:10:25.126 ], 00:10:25.126 "product_name": "Raid Volume", 00:10:25.126 "block_size": 512, 00:10:25.126 "num_blocks": 131072, 00:10:25.126 "uuid": "3cc6e3fb-de5f-4262-85a4-80b594aaafaa", 00:10:25.126 "assigned_rate_limits": { 00:10:25.126 "rw_ios_per_sec": 0, 00:10:25.126 "rw_mbytes_per_sec": 0, 00:10:25.126 "r_mbytes_per_sec": 0, 00:10:25.126 "w_mbytes_per_sec": 0 00:10:25.126 }, 00:10:25.126 "claimed": false, 00:10:25.126 "zoned": false, 00:10:25.126 "supported_io_types": { 00:10:25.126 "read": true, 00:10:25.126 "write": true, 00:10:25.126 "unmap": true, 00:10:25.126 "flush": true, 00:10:25.126 "reset": true, 00:10:25.126 "nvme_admin": false, 00:10:25.126 "nvme_io": false, 00:10:25.126 "nvme_io_md": false, 00:10:25.126 "write_zeroes": true, 00:10:25.126 "zcopy": false, 00:10:25.126 "get_zone_info": false, 00:10:25.126 "zone_management": false, 00:10:25.126 "zone_append": false, 00:10:25.126 "compare": false, 00:10:25.126 "compare_and_write": false, 00:10:25.126 "abort": false, 00:10:25.126 "seek_hole": false, 00:10:25.126 "seek_data": false, 00:10:25.126 "copy": false, 00:10:25.126 "nvme_iov_md": false 00:10:25.126 }, 00:10:25.126 "memory_domains": [ 00:10:25.126 { 00:10:25.126 "dma_device_id": "system", 00:10:25.126 "dma_device_type": 1 00:10:25.126 }, 00:10:25.126 { 00:10:25.126 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:25.126 "dma_device_type": 2 00:10:25.126 }, 00:10:25.126 { 00:10:25.126 "dma_device_id": "system", 00:10:25.126 "dma_device_type": 1 00:10:25.126 }, 00:10:25.126 { 00:10:25.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.126 "dma_device_type": 2 00:10:25.126 } 00:10:25.126 ], 00:10:25.126 "driver_specific": { 00:10:25.126 "raid": { 00:10:25.126 "uuid": "3cc6e3fb-de5f-4262-85a4-80b594aaafaa", 00:10:25.126 "strip_size_kb": 64, 00:10:25.126 "state": "online", 00:10:25.126 "raid_level": "raid0", 00:10:25.126 "superblock": false, 00:10:25.126 "num_base_bdevs": 2, 00:10:25.126 "num_base_bdevs_discovered": 2, 00:10:25.126 "num_base_bdevs_operational": 2, 00:10:25.126 "base_bdevs_list": [ 00:10:25.126 { 00:10:25.126 "name": "BaseBdev1", 00:10:25.126 "uuid": "ebf1d62b-1507-4c79-b90a-9f80175ef51b", 00:10:25.126 "is_configured": true, 00:10:25.126 "data_offset": 0, 00:10:25.126 "data_size": 65536 00:10:25.126 }, 00:10:25.126 { 00:10:25.126 "name": "BaseBdev2", 00:10:25.126 "uuid": "5b422636-26b5-4e1a-8edc-ee33db71d373", 00:10:25.126 "is_configured": true, 00:10:25.126 "data_offset": 0, 00:10:25.126 "data_size": 65536 00:10:25.126 } 00:10:25.126 ] 00:10:25.126 } 00:10:25.126 } 00:10:25.126 }' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:25.126 BaseBdev2' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- 
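The `Existed_Raid` volume descriptor above reports `num_blocks: 131072`: with `superblock: false` and two strip-aligned 65536-block members, the raid0 capacity is simply the sum of the base bdevs. (Note also that `zcopy`, `copy`, and `abort` flip to `false` relative to the Malloc base bdevs, since the raid layer cannot offer those operations across members.) The size arithmetic, assuming members that divide evenly by the 64 KiB strip as they do here:

```python
# raid0 without an on-disk superblock exposes the summed capacity of its
# members when each member is evenly divisible by the strip size.
base_bdev_blocks = [65536, 65536]  # BaseBdev1, BaseBdev2 (from the trace)
raid_num_blocks = sum(base_bdev_blocks)
assert raid_num_blocks == 131072  # matches "num_blocks": 131072 above
```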
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.126 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.127 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.127 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.127 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.127 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.127 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.127 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.127 07:07:07 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:25.127 [2024-11-20 07:07:07.372670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.127 [2024-11-20 07:07:07.372712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.127 [2024-11-20 07:07:07.372787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.391 07:07:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.391 "name": "Existed_Raid", 00:10:25.391 "uuid": "3cc6e3fb-de5f-4262-85a4-80b594aaafaa", 00:10:25.391 "strip_size_kb": 64, 00:10:25.391 "state": "offline", 00:10:25.391 "raid_level": "raid0", 00:10:25.391 "superblock": false, 00:10:25.391 "num_base_bdevs": 2, 00:10:25.391 "num_base_bdevs_discovered": 1, 00:10:25.391 "num_base_bdevs_operational": 1, 00:10:25.391 "base_bdevs_list": [ 00:10:25.391 { 00:10:25.391 "name": null, 00:10:25.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.391 "is_configured": false, 00:10:25.391 "data_offset": 0, 00:10:25.391 "data_size": 65536 00:10:25.391 }, 00:10:25.391 { 00:10:25.391 "name": "BaseBdev2", 00:10:25.391 "uuid": "5b422636-26b5-4e1a-8edc-ee33db71d373", 00:10:25.391 "is_configured": true, 00:10:25.391 "data_offset": 0, 00:10:25.391 "data_size": 65536 00:10:25.391 } 00:10:25.391 ] 00:10:25.391 }' 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.391 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.966 07:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.966 [2024-11-20 07:07:07.980552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.966 [2024-11-20 07:07:07.980645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.966 07:07:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60953 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60953 ']' 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60953 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60953 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.966 killing process with pid 60953 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60953' 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60953 00:10:25.966 [2024-11-20 07:07:08.179662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:10:25.966 07:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60953 00:10:25.966 [2024-11-20 07:07:08.196993] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:27.347 00:10:27.347 real 0m5.254s 00:10:27.347 user 0m7.667s 00:10:27.347 sys 0m0.829s 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.347 ************************************ 00:10:27.347 END TEST raid_state_function_test 00:10:27.347 ************************************ 00:10:27.347 07:07:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:27.347 07:07:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:27.347 07:07:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.347 07:07:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.347 ************************************ 00:10:27.347 START TEST raid_state_function_test_sb 00:10:27.347 ************************************ 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61206 00:10:27.347 Process raid pid: 61206 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61206' 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61206 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61206 ']' 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.347 07:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.347 [2024-11-20 07:07:09.512775] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:27.347 [2024-11-20 07:07:09.512889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.607 [2024-11-20 07:07:09.671456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.607 [2024-11-20 07:07:09.798483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.866 [2024-11-20 07:07:10.011367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.866 [2024-11-20 07:07:10.011435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.125 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.125 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:28.125 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:28.125 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.125 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.432 [2024-11-20 07:07:10.394857] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.432 [2024-11-20 07:07:10.394908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.432 [2024-11-20 07:07:10.394918] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.432 [2024-11-20 07:07:10.394928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.432 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.432 
07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:28.432 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.432 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.432 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.432 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.433 "name": "Existed_Raid", 00:10:28.433 "uuid": "6d53de47-8356-4fde-84bd-4438e9ed9538", 00:10:28.433 "strip_size_kb": 
64, 00:10:28.433 "state": "configuring", 00:10:28.433 "raid_level": "raid0", 00:10:28.433 "superblock": true, 00:10:28.433 "num_base_bdevs": 2, 00:10:28.433 "num_base_bdevs_discovered": 0, 00:10:28.433 "num_base_bdevs_operational": 2, 00:10:28.433 "base_bdevs_list": [ 00:10:28.433 { 00:10:28.433 "name": "BaseBdev1", 00:10:28.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.433 "is_configured": false, 00:10:28.433 "data_offset": 0, 00:10:28.433 "data_size": 0 00:10:28.433 }, 00:10:28.433 { 00:10:28.433 "name": "BaseBdev2", 00:10:28.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.433 "is_configured": false, 00:10:28.433 "data_offset": 0, 00:10:28.433 "data_size": 0 00:10:28.433 } 00:10:28.433 ] 00:10:28.433 }' 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.433 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.706 [2024-11-20 07:07:10.802152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.706 [2024-11-20 07:07:10.802200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.706 07:07:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.706 [2024-11-20 07:07:10.814135] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.706 [2024-11-20 07:07:10.814182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.706 [2024-11-20 07:07:10.814192] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.706 [2024-11-20 07:07:10.814205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.706 [2024-11-20 07:07:10.866258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.706 BaseBdev1 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.706 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.706 [ 00:10:28.706 { 00:10:28.706 "name": "BaseBdev1", 00:10:28.706 "aliases": [ 00:10:28.706 "196d1b82-2886-4367-a109-0ad901dabca2" 00:10:28.706 ], 00:10:28.706 "product_name": "Malloc disk", 00:10:28.706 "block_size": 512, 00:10:28.706 "num_blocks": 65536, 00:10:28.706 "uuid": "196d1b82-2886-4367-a109-0ad901dabca2", 00:10:28.706 "assigned_rate_limits": { 00:10:28.706 "rw_ios_per_sec": 0, 00:10:28.706 "rw_mbytes_per_sec": 0, 00:10:28.706 "r_mbytes_per_sec": 0, 00:10:28.706 "w_mbytes_per_sec": 0 00:10:28.706 }, 00:10:28.706 "claimed": true, 00:10:28.706 "claim_type": "exclusive_write", 00:10:28.706 "zoned": false, 00:10:28.706 "supported_io_types": { 00:10:28.706 "read": true, 00:10:28.706 "write": true, 00:10:28.706 "unmap": true, 00:10:28.706 "flush": true, 00:10:28.706 "reset": true, 00:10:28.706 "nvme_admin": false, 00:10:28.706 "nvme_io": false, 00:10:28.706 "nvme_io_md": false, 00:10:28.706 "write_zeroes": true, 00:10:28.706 "zcopy": true, 00:10:28.706 "get_zone_info": false, 00:10:28.706 "zone_management": false, 00:10:28.706 "zone_append": false, 00:10:28.706 "compare": false, 00:10:28.706 "compare_and_write": false, 00:10:28.706 
"abort": true, 00:10:28.706 "seek_hole": false, 00:10:28.706 "seek_data": false, 00:10:28.706 "copy": true, 00:10:28.706 "nvme_iov_md": false 00:10:28.706 }, 00:10:28.706 "memory_domains": [ 00:10:28.706 { 00:10:28.706 "dma_device_id": "system", 00:10:28.706 "dma_device_type": 1 00:10:28.706 }, 00:10:28.707 { 00:10:28.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.707 "dma_device_type": 2 00:10:28.707 } 00:10:28.707 ], 00:10:28.707 "driver_specific": {} 00:10:28.707 } 00:10:28.707 ] 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.707 "name": "Existed_Raid", 00:10:28.707 "uuid": "74354b16-2532-44e5-b282-2a2618e49341", 00:10:28.707 "strip_size_kb": 64, 00:10:28.707 "state": "configuring", 00:10:28.707 "raid_level": "raid0", 00:10:28.707 "superblock": true, 00:10:28.707 "num_base_bdevs": 2, 00:10:28.707 "num_base_bdevs_discovered": 1, 00:10:28.707 "num_base_bdevs_operational": 2, 00:10:28.707 "base_bdevs_list": [ 00:10:28.707 { 00:10:28.707 "name": "BaseBdev1", 00:10:28.707 "uuid": "196d1b82-2886-4367-a109-0ad901dabca2", 00:10:28.707 "is_configured": true, 00:10:28.707 "data_offset": 2048, 00:10:28.707 "data_size": 63488 00:10:28.707 }, 00:10:28.707 { 00:10:28.707 "name": "BaseBdev2", 00:10:28.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.707 "is_configured": false, 00:10:28.707 "data_offset": 0, 00:10:28.707 "data_size": 0 00:10:28.707 } 00:10:28.707 ] 00:10:28.707 }' 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.707 07:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.274 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.274 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.274 07:07:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.274 [2024-11-20 07:07:11.329644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.274 [2024-11-20 07:07:11.329729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.275 [2024-11-20 07:07:11.341658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.275 [2024-11-20 07:07:11.343887] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.275 [2024-11-20 07:07:11.343934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.275 "name": "Existed_Raid", 00:10:29.275 "uuid": "d71e8715-b974-4b1e-a5ba-22206d20e6fd", 00:10:29.275 "strip_size_kb": 64, 00:10:29.275 "state": "configuring", 00:10:29.275 "raid_level": "raid0", 00:10:29.275 "superblock": true, 00:10:29.275 "num_base_bdevs": 2, 00:10:29.275 "num_base_bdevs_discovered": 1, 00:10:29.275 "num_base_bdevs_operational": 2, 00:10:29.275 "base_bdevs_list": [ 00:10:29.275 { 00:10:29.275 "name": "BaseBdev1", 00:10:29.275 "uuid": "196d1b82-2886-4367-a109-0ad901dabca2", 00:10:29.275 "is_configured": true, 00:10:29.275 "data_offset": 2048, 
00:10:29.275 "data_size": 63488 00:10:29.275 }, 00:10:29.275 { 00:10:29.275 "name": "BaseBdev2", 00:10:29.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.275 "is_configured": false, 00:10:29.275 "data_offset": 0, 00:10:29.275 "data_size": 0 00:10:29.275 } 00:10:29.275 ] 00:10:29.275 }' 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.275 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.534 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.534 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.534 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.793 [2024-11-20 07:07:11.820179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.793 [2024-11-20 07:07:11.820555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:29.793 [2024-11-20 07:07:11.820577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:29.793 [2024-11-20 07:07:11.820996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:29.793 BaseBdev2 00:10:29.793 [2024-11-20 07:07:11.821202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:29.793 [2024-11-20 07:07:11.821220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:29.793 [2024-11-20 07:07:11.821413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.793 [ 00:10:29.793 { 00:10:29.793 "name": "BaseBdev2", 00:10:29.793 "aliases": [ 00:10:29.793 "bb16a93c-de31-48f4-8edf-ba2d14b56b01" 00:10:29.793 ], 00:10:29.793 "product_name": "Malloc disk", 00:10:29.793 "block_size": 512, 00:10:29.793 "num_blocks": 65536, 00:10:29.793 "uuid": "bb16a93c-de31-48f4-8edf-ba2d14b56b01", 00:10:29.793 "assigned_rate_limits": { 00:10:29.793 "rw_ios_per_sec": 0, 00:10:29.793 "rw_mbytes_per_sec": 0, 00:10:29.793 "r_mbytes_per_sec": 0, 00:10:29.793 "w_mbytes_per_sec": 0 00:10:29.793 }, 00:10:29.793 "claimed": true, 00:10:29.793 "claim_type": 
"exclusive_write", 00:10:29.793 "zoned": false, 00:10:29.793 "supported_io_types": { 00:10:29.793 "read": true, 00:10:29.793 "write": true, 00:10:29.793 "unmap": true, 00:10:29.793 "flush": true, 00:10:29.793 "reset": true, 00:10:29.793 "nvme_admin": false, 00:10:29.793 "nvme_io": false, 00:10:29.793 "nvme_io_md": false, 00:10:29.793 "write_zeroes": true, 00:10:29.793 "zcopy": true, 00:10:29.793 "get_zone_info": false, 00:10:29.793 "zone_management": false, 00:10:29.793 "zone_append": false, 00:10:29.793 "compare": false, 00:10:29.793 "compare_and_write": false, 00:10:29.793 "abort": true, 00:10:29.793 "seek_hole": false, 00:10:29.793 "seek_data": false, 00:10:29.793 "copy": true, 00:10:29.793 "nvme_iov_md": false 00:10:29.793 }, 00:10:29.793 "memory_domains": [ 00:10:29.793 { 00:10:29.793 "dma_device_id": "system", 00:10:29.793 "dma_device_type": 1 00:10:29.793 }, 00:10:29.793 { 00:10:29.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.793 "dma_device_type": 2 00:10:29.793 } 00:10:29.793 ], 00:10:29.793 "driver_specific": {} 00:10:29.793 } 00:10:29.793 ] 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.793 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.794 "name": "Existed_Raid", 00:10:29.794 "uuid": "d71e8715-b974-4b1e-a5ba-22206d20e6fd", 00:10:29.794 "strip_size_kb": 64, 00:10:29.794 "state": "online", 00:10:29.794 "raid_level": "raid0", 00:10:29.794 "superblock": true, 00:10:29.794 "num_base_bdevs": 2, 00:10:29.794 "num_base_bdevs_discovered": 2, 00:10:29.794 "num_base_bdevs_operational": 2, 00:10:29.794 "base_bdevs_list": [ 00:10:29.794 { 00:10:29.794 "name": "BaseBdev1", 00:10:29.794 "uuid": "196d1b82-2886-4367-a109-0ad901dabca2", 00:10:29.794 "is_configured": true, 00:10:29.794 "data_offset": 2048, 00:10:29.794 "data_size": 63488 
00:10:29.794 }, 00:10:29.794 { 00:10:29.794 "name": "BaseBdev2", 00:10:29.794 "uuid": "bb16a93c-de31-48f4-8edf-ba2d14b56b01", 00:10:29.794 "is_configured": true, 00:10:29.794 "data_offset": 2048, 00:10:29.794 "data_size": 63488 00:10:29.794 } 00:10:29.794 ] 00:10:29.794 }' 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.794 07:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.053 [2024-11-20 07:07:12.295863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.053 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.312 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.312 "name": 
"Existed_Raid", 00:10:30.312 "aliases": [ 00:10:30.312 "d71e8715-b974-4b1e-a5ba-22206d20e6fd" 00:10:30.312 ], 00:10:30.312 "product_name": "Raid Volume", 00:10:30.312 "block_size": 512, 00:10:30.312 "num_blocks": 126976, 00:10:30.312 "uuid": "d71e8715-b974-4b1e-a5ba-22206d20e6fd", 00:10:30.312 "assigned_rate_limits": { 00:10:30.312 "rw_ios_per_sec": 0, 00:10:30.312 "rw_mbytes_per_sec": 0, 00:10:30.312 "r_mbytes_per_sec": 0, 00:10:30.312 "w_mbytes_per_sec": 0 00:10:30.312 }, 00:10:30.312 "claimed": false, 00:10:30.312 "zoned": false, 00:10:30.312 "supported_io_types": { 00:10:30.312 "read": true, 00:10:30.312 "write": true, 00:10:30.312 "unmap": true, 00:10:30.312 "flush": true, 00:10:30.312 "reset": true, 00:10:30.312 "nvme_admin": false, 00:10:30.312 "nvme_io": false, 00:10:30.312 "nvme_io_md": false, 00:10:30.312 "write_zeroes": true, 00:10:30.312 "zcopy": false, 00:10:30.312 "get_zone_info": false, 00:10:30.312 "zone_management": false, 00:10:30.312 "zone_append": false, 00:10:30.312 "compare": false, 00:10:30.312 "compare_and_write": false, 00:10:30.312 "abort": false, 00:10:30.312 "seek_hole": false, 00:10:30.312 "seek_data": false, 00:10:30.312 "copy": false, 00:10:30.312 "nvme_iov_md": false 00:10:30.312 }, 00:10:30.312 "memory_domains": [ 00:10:30.312 { 00:10:30.312 "dma_device_id": "system", 00:10:30.312 "dma_device_type": 1 00:10:30.312 }, 00:10:30.312 { 00:10:30.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.312 "dma_device_type": 2 00:10:30.312 }, 00:10:30.312 { 00:10:30.312 "dma_device_id": "system", 00:10:30.312 "dma_device_type": 1 00:10:30.312 }, 00:10:30.312 { 00:10:30.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.312 "dma_device_type": 2 00:10:30.312 } 00:10:30.312 ], 00:10:30.312 "driver_specific": { 00:10:30.312 "raid": { 00:10:30.312 "uuid": "d71e8715-b974-4b1e-a5ba-22206d20e6fd", 00:10:30.312 "strip_size_kb": 64, 00:10:30.312 "state": "online", 00:10:30.312 "raid_level": "raid0", 00:10:30.312 "superblock": true, 00:10:30.312 
"num_base_bdevs": 2, 00:10:30.312 "num_base_bdevs_discovered": 2, 00:10:30.312 "num_base_bdevs_operational": 2, 00:10:30.312 "base_bdevs_list": [ 00:10:30.312 { 00:10:30.312 "name": "BaseBdev1", 00:10:30.312 "uuid": "196d1b82-2886-4367-a109-0ad901dabca2", 00:10:30.312 "is_configured": true, 00:10:30.312 "data_offset": 2048, 00:10:30.312 "data_size": 63488 00:10:30.312 }, 00:10:30.312 { 00:10:30.312 "name": "BaseBdev2", 00:10:30.312 "uuid": "bb16a93c-de31-48f4-8edf-ba2d14b56b01", 00:10:30.312 "is_configured": true, 00:10:30.312 "data_offset": 2048, 00:10:30.312 "data_size": 63488 00:10:30.312 } 00:10:30.312 ] 00:10:30.312 } 00:10:30.312 } 00:10:30.313 }' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:30.313 BaseBdev2' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.313 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.313 [2024-11-20 07:07:12.547192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.313 [2024-11-20 07:07:12.547263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.313 [2024-11-20 07:07:12.547334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.572 07:07:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.572 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.572 "name": "Existed_Raid", 00:10:30.572 "uuid": "d71e8715-b974-4b1e-a5ba-22206d20e6fd", 00:10:30.572 "strip_size_kb": 64, 00:10:30.572 "state": "offline", 00:10:30.572 "raid_level": "raid0", 00:10:30.572 "superblock": true, 00:10:30.572 "num_base_bdevs": 2, 00:10:30.572 "num_base_bdevs_discovered": 1, 00:10:30.572 "num_base_bdevs_operational": 1, 00:10:30.572 "base_bdevs_list": [ 00:10:30.572 { 00:10:30.572 "name": null, 00:10:30.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.572 "is_configured": false, 00:10:30.572 "data_offset": 0, 00:10:30.572 "data_size": 63488 00:10:30.572 }, 00:10:30.572 { 00:10:30.572 "name": "BaseBdev2", 00:10:30.572 "uuid": "bb16a93c-de31-48f4-8edf-ba2d14b56b01", 00:10:30.572 "is_configured": true, 00:10:30.572 "data_offset": 2048, 00:10:30.572 "data_size": 63488 00:10:30.572 } 00:10:30.572 ] 00:10:30.572 }' 00:10:30.573 07:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.573 07:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:31.143 07:07:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.143 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 [2024-11-20 07:07:13.189832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.144 [2024-11-20 07:07:13.190049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.144 07:07:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61206 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61206 ']' 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61206 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61206 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61206' 00:10:31.144 killing process with pid 61206 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61206 00:10:31.144 [2024-11-20 07:07:13.406817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.144 07:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61206 00:10:31.403 [2024-11-20 07:07:13.426526] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.783 07:07:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:10:32.783 00:10:32.783 real 0m5.271s 00:10:32.783 user 0m7.494s 00:10:32.783 sys 0m0.842s 00:10:32.783 07:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.783 ************************************ 00:10:32.783 END TEST raid_state_function_test_sb 00:10:32.783 ************************************ 00:10:32.783 07:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 07:07:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:32.783 07:07:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.783 07:07:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.783 07:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 ************************************ 00:10:32.784 START TEST raid_superblock_test 00:10:32.784 ************************************ 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:32.784 07:07:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61458 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61458 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61458 ']' 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.784 07:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.784 [2024-11-20 07:07:14.872786] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:32.784 [2024-11-20 07:07:14.873062] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61458 ] 00:10:33.043 [2024-11-20 07:07:15.058615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.043 [2024-11-20 07:07:15.210572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.302 [2024-11-20 07:07:15.456452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.302 [2024-11-20 07:07:15.456678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.562 07:07:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 malloc1 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 [2024-11-20 07:07:15.762312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:33.562 [2024-11-20 07:07:15.762526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.562 [2024-11-20 07:07:15.762579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:33.562 [2024-11-20 07:07:15.762624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.562 [2024-11-20 07:07:15.765392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.562 [2024-11-20 07:07:15.765496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:33.562 pt1 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.562 07:07:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 malloc2 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.562 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.822 [2024-11-20 07:07:15.826418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.822 [2024-11-20 07:07:15.826624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.822 [2024-11-20 07:07:15.826689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:33.822 
[2024-11-20 07:07:15.826731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.822 [2024-11-20 07:07:15.829763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.822 [2024-11-20 07:07:15.829881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.822 pt2 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.822 [2024-11-20 07:07:15.838737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:33.822 [2024-11-20 07:07:15.841258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.822 [2024-11-20 07:07:15.841589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:33.822 [2024-11-20 07:07:15.841659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:33.822 [2024-11-20 07:07:15.842061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:33.822 [2024-11-20 07:07:15.842321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:33.822 [2024-11-20 07:07:15.842394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:33.822 [2024-11-20 07:07:15.842701] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.822 "name": "raid_bdev1", 00:10:33.822 "uuid": 
"c64c3772-51e9-44fd-8f8a-6eb75ebd476a", 00:10:33.822 "strip_size_kb": 64, 00:10:33.822 "state": "online", 00:10:33.822 "raid_level": "raid0", 00:10:33.822 "superblock": true, 00:10:33.822 "num_base_bdevs": 2, 00:10:33.822 "num_base_bdevs_discovered": 2, 00:10:33.822 "num_base_bdevs_operational": 2, 00:10:33.822 "base_bdevs_list": [ 00:10:33.822 { 00:10:33.822 "name": "pt1", 00:10:33.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.822 "is_configured": true, 00:10:33.822 "data_offset": 2048, 00:10:33.822 "data_size": 63488 00:10:33.822 }, 00:10:33.822 { 00:10:33.822 "name": "pt2", 00:10:33.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.822 "is_configured": true, 00:10:33.822 "data_offset": 2048, 00:10:33.822 "data_size": 63488 00:10:33.822 } 00:10:33.822 ] 00:10:33.822 }' 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.822 07:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.082 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.082 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.083 07:07:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.083 [2024-11-20 07:07:16.270484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.083 "name": "raid_bdev1", 00:10:34.083 "aliases": [ 00:10:34.083 "c64c3772-51e9-44fd-8f8a-6eb75ebd476a" 00:10:34.083 ], 00:10:34.083 "product_name": "Raid Volume", 00:10:34.083 "block_size": 512, 00:10:34.083 "num_blocks": 126976, 00:10:34.083 "uuid": "c64c3772-51e9-44fd-8f8a-6eb75ebd476a", 00:10:34.083 "assigned_rate_limits": { 00:10:34.083 "rw_ios_per_sec": 0, 00:10:34.083 "rw_mbytes_per_sec": 0, 00:10:34.083 "r_mbytes_per_sec": 0, 00:10:34.083 "w_mbytes_per_sec": 0 00:10:34.083 }, 00:10:34.083 "claimed": false, 00:10:34.083 "zoned": false, 00:10:34.083 "supported_io_types": { 00:10:34.083 "read": true, 00:10:34.083 "write": true, 00:10:34.083 "unmap": true, 00:10:34.083 "flush": true, 00:10:34.083 "reset": true, 00:10:34.083 "nvme_admin": false, 00:10:34.083 "nvme_io": false, 00:10:34.083 "nvme_io_md": false, 00:10:34.083 "write_zeroes": true, 00:10:34.083 "zcopy": false, 00:10:34.083 "get_zone_info": false, 00:10:34.083 "zone_management": false, 00:10:34.083 "zone_append": false, 00:10:34.083 "compare": false, 00:10:34.083 "compare_and_write": false, 00:10:34.083 "abort": false, 00:10:34.083 "seek_hole": false, 00:10:34.083 "seek_data": false, 00:10:34.083 "copy": false, 00:10:34.083 "nvme_iov_md": false 00:10:34.083 }, 00:10:34.083 "memory_domains": [ 00:10:34.083 { 00:10:34.083 "dma_device_id": "system", 00:10:34.083 "dma_device_type": 1 00:10:34.083 }, 00:10:34.083 { 00:10:34.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.083 "dma_device_type": 2 00:10:34.083 }, 00:10:34.083 { 00:10:34.083 "dma_device_id": "system", 00:10:34.083 "dma_device_type": 
1 00:10:34.083 }, 00:10:34.083 { 00:10:34.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.083 "dma_device_type": 2 00:10:34.083 } 00:10:34.083 ], 00:10:34.083 "driver_specific": { 00:10:34.083 "raid": { 00:10:34.083 "uuid": "c64c3772-51e9-44fd-8f8a-6eb75ebd476a", 00:10:34.083 "strip_size_kb": 64, 00:10:34.083 "state": "online", 00:10:34.083 "raid_level": "raid0", 00:10:34.083 "superblock": true, 00:10:34.083 "num_base_bdevs": 2, 00:10:34.083 "num_base_bdevs_discovered": 2, 00:10:34.083 "num_base_bdevs_operational": 2, 00:10:34.083 "base_bdevs_list": [ 00:10:34.083 { 00:10:34.083 "name": "pt1", 00:10:34.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.083 "is_configured": true, 00:10:34.083 "data_offset": 2048, 00:10:34.083 "data_size": 63488 00:10:34.083 }, 00:10:34.083 { 00:10:34.083 "name": "pt2", 00:10:34.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.083 "is_configured": true, 00:10:34.083 "data_offset": 2048, 00:10:34.083 "data_size": 63488 00:10:34.083 } 00:10:34.083 ] 00:10:34.083 } 00:10:34.083 } 00:10:34.083 }' 00:10:34.083 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.343 pt2' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.343 [2024-11-20 07:07:16.506028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.343 07:07:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c64c3772-51e9-44fd-8f8a-6eb75ebd476a 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c64c3772-51e9-44fd-8f8a-6eb75ebd476a ']' 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.343 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.343 [2024-11-20 07:07:16.553639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.343 [2024-11-20 07:07:16.553790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.344 [2024-11-20 07:07:16.553973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.344 [2024-11-20 07:07:16.554085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.344 [2024-11-20 07:07:16.554152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:34.344 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.344 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.344 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:34.344 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.344 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.344 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.604 07:07:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.604 [2024-11-20 07:07:16.681520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:34.604 [2024-11-20 07:07:16.684025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:34.604 [2024-11-20 07:07:16.684139] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:34.604 [2024-11-20 07:07:16.684214] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:34.604 [2024-11-20 07:07:16.684233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.604 [2024-11-20 07:07:16.684251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:34.604 request: 00:10:34.604 { 00:10:34.604 "name": "raid_bdev1", 00:10:34.604 "raid_level": "raid0", 00:10:34.604 "base_bdevs": [ 00:10:34.604 "malloc1", 00:10:34.604 "malloc2" 00:10:34.604 ], 00:10:34.604 "strip_size_kb": 64, 00:10:34.604 "superblock": false, 00:10:34.604 "method": "bdev_raid_create", 00:10:34.604 "req_id": 1 00:10:34.604 } 00:10:34.604 Got JSON-RPC error response 00:10:34.604 response: 00:10:34.604 { 00:10:34.604 "code": -17, 00:10:34.604 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:34.604 } 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.604 [2024-11-20 07:07:16.737401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:34.604 [2024-11-20 07:07:16.737532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.604 [2024-11-20 07:07:16.737564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:34.604 [2024-11-20 07:07:16.737580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.604 [2024-11-20 07:07:16.740364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.604 [2024-11-20 07:07:16.740416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:34.604 [2024-11-20 07:07:16.740549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:34.604 [2024-11-20 07:07:16.740627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:34.604 pt1 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.604 "name": "raid_bdev1", 00:10:34.604 "uuid": "c64c3772-51e9-44fd-8f8a-6eb75ebd476a", 00:10:34.604 "strip_size_kb": 64, 00:10:34.604 "state": "configuring", 00:10:34.604 "raid_level": "raid0", 00:10:34.604 "superblock": true, 00:10:34.604 "num_base_bdevs": 2, 00:10:34.604 "num_base_bdevs_discovered": 1, 00:10:34.604 "num_base_bdevs_operational": 2, 00:10:34.604 "base_bdevs_list": [ 00:10:34.604 { 00:10:34.604 "name": "pt1", 00:10:34.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.604 "is_configured": true, 00:10:34.604 "data_offset": 2048, 00:10:34.604 "data_size": 63488 00:10:34.604 }, 00:10:34.604 { 00:10:34.604 "name": null, 00:10:34.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.604 "is_configured": false, 00:10:34.604 "data_offset": 2048, 00:10:34.604 "data_size": 63488 00:10:34.604 } 00:10:34.604 ] 00:10:34.604 }' 00:10:34.604 07:07:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.604 07:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.171 [2024-11-20 07:07:17.180614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.171 [2024-11-20 07:07:17.180739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.171 [2024-11-20 07:07:17.180773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:35.171 [2024-11-20 07:07:17.180792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.171 [2024-11-20 07:07:17.181539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.171 [2024-11-20 07:07:17.181596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.171 [2024-11-20 07:07:17.181734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:35.171 [2024-11-20 07:07:17.181785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.171 [2024-11-20 07:07:17.181936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:35.171 [2024-11-20 07:07:17.181961] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:35.171 [2024-11-20 07:07:17.182284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:35.171 [2024-11-20 07:07:17.182507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:35.171 [2024-11-20 07:07:17.182531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:35.171 [2024-11-20 07:07:17.182713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.171 pt2 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.171 "name": "raid_bdev1", 00:10:35.171 "uuid": "c64c3772-51e9-44fd-8f8a-6eb75ebd476a", 00:10:35.171 "strip_size_kb": 64, 00:10:35.171 "state": "online", 00:10:35.171 "raid_level": "raid0", 00:10:35.171 "superblock": true, 00:10:35.171 "num_base_bdevs": 2, 00:10:35.171 "num_base_bdevs_discovered": 2, 00:10:35.171 "num_base_bdevs_operational": 2, 00:10:35.171 "base_bdevs_list": [ 00:10:35.171 { 00:10:35.171 "name": "pt1", 00:10:35.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.171 "is_configured": true, 00:10:35.171 "data_offset": 2048, 00:10:35.171 "data_size": 63488 00:10:35.171 }, 00:10:35.171 { 00:10:35.171 "name": "pt2", 00:10:35.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.171 "is_configured": true, 00:10:35.171 "data_offset": 2048, 00:10:35.171 "data_size": 63488 00:10:35.171 } 00:10:35.171 ] 00:10:35.171 }' 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.171 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:35.428 
07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.428 [2024-11-20 07:07:17.576228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.428 "name": "raid_bdev1", 00:10:35.428 "aliases": [ 00:10:35.428 "c64c3772-51e9-44fd-8f8a-6eb75ebd476a" 00:10:35.428 ], 00:10:35.428 "product_name": "Raid Volume", 00:10:35.428 "block_size": 512, 00:10:35.428 "num_blocks": 126976, 00:10:35.428 "uuid": "c64c3772-51e9-44fd-8f8a-6eb75ebd476a", 00:10:35.428 "assigned_rate_limits": { 00:10:35.428 "rw_ios_per_sec": 0, 00:10:35.428 "rw_mbytes_per_sec": 0, 00:10:35.428 "r_mbytes_per_sec": 0, 00:10:35.428 "w_mbytes_per_sec": 0 00:10:35.428 }, 00:10:35.428 "claimed": false, 00:10:35.428 "zoned": false, 00:10:35.428 "supported_io_types": { 00:10:35.428 "read": true, 00:10:35.428 "write": true, 00:10:35.428 "unmap": true, 00:10:35.428 "flush": true, 00:10:35.428 "reset": true, 00:10:35.428 "nvme_admin": false, 00:10:35.428 "nvme_io": false, 00:10:35.428 "nvme_io_md": false, 00:10:35.428 
"write_zeroes": true, 00:10:35.428 "zcopy": false, 00:10:35.428 "get_zone_info": false, 00:10:35.428 "zone_management": false, 00:10:35.428 "zone_append": false, 00:10:35.428 "compare": false, 00:10:35.428 "compare_and_write": false, 00:10:35.428 "abort": false, 00:10:35.428 "seek_hole": false, 00:10:35.428 "seek_data": false, 00:10:35.428 "copy": false, 00:10:35.428 "nvme_iov_md": false 00:10:35.428 }, 00:10:35.428 "memory_domains": [ 00:10:35.428 { 00:10:35.428 "dma_device_id": "system", 00:10:35.428 "dma_device_type": 1 00:10:35.428 }, 00:10:35.428 { 00:10:35.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.428 "dma_device_type": 2 00:10:35.428 }, 00:10:35.428 { 00:10:35.428 "dma_device_id": "system", 00:10:35.428 "dma_device_type": 1 00:10:35.428 }, 00:10:35.428 { 00:10:35.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.428 "dma_device_type": 2 00:10:35.428 } 00:10:35.428 ], 00:10:35.428 "driver_specific": { 00:10:35.428 "raid": { 00:10:35.428 "uuid": "c64c3772-51e9-44fd-8f8a-6eb75ebd476a", 00:10:35.428 "strip_size_kb": 64, 00:10:35.428 "state": "online", 00:10:35.428 "raid_level": "raid0", 00:10:35.428 "superblock": true, 00:10:35.428 "num_base_bdevs": 2, 00:10:35.428 "num_base_bdevs_discovered": 2, 00:10:35.428 "num_base_bdevs_operational": 2, 00:10:35.428 "base_bdevs_list": [ 00:10:35.428 { 00:10:35.428 "name": "pt1", 00:10:35.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.428 "is_configured": true, 00:10:35.428 "data_offset": 2048, 00:10:35.428 "data_size": 63488 00:10:35.428 }, 00:10:35.428 { 00:10:35.428 "name": "pt2", 00:10:35.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.428 "is_configured": true, 00:10:35.428 "data_offset": 2048, 00:10:35.428 "data_size": 63488 00:10:35.428 } 00:10:35.428 ] 00:10:35.428 } 00:10:35.428 } 00:10:35.428 }' 00:10:35.428 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:35.429 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:35.429 pt2' 00:10:35.429 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.687 07:07:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.687 [2024-11-20 07:07:17.799869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c64c3772-51e9-44fd-8f8a-6eb75ebd476a '!=' c64c3772-51e9-44fd-8f8a-6eb75ebd476a ']' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61458 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61458 ']' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61458 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61458 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.687 killing process with pid 61458 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61458' 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61458 00:10:35.687 [2024-11-20 07:07:17.864720] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.687 07:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61458 00:10:35.687 [2024-11-20 07:07:17.864913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.687 [2024-11-20 07:07:17.865003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.687 [2024-11-20 07:07:17.865024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:35.945 [2024-11-20 07:07:18.132083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.844 07:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:37.844 00:10:37.844 real 0m4.826s 00:10:37.844 user 0m6.405s 00:10:37.844 sys 0m0.904s 00:10:37.844 07:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.844 07:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.844 ************************************ 00:10:37.844 END TEST raid_superblock_test 00:10:37.844 ************************************ 00:10:37.844 07:07:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:37.844 07:07:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:37.844 07:07:19 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:10:37.844 07:07:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.844 ************************************ 00:10:37.844 START TEST raid_read_error_test 00:10:37.844 ************************************ 00:10:37.844 07:07:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:37.844 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:37.844 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:37.844 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:37.844 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:37.844 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iproMslj64 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61670 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61670 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61670 ']' 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.845 07:07:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.845 [2024-11-20 07:07:19.767709] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:37.845 [2024-11-20 07:07:19.767847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61670 ] 00:10:37.845 [2024-11-20 07:07:19.945903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.845 [2024-11-20 07:07:20.103838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.436 [2024-11-20 07:07:20.383919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.436 [2024-11-20 07:07:20.383994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.436 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.436 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:38.436 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.436 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.436 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.436 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 BaseBdev1_malloc 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 true 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 [2024-11-20 07:07:20.739983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.694 [2024-11-20 07:07:20.740072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.694 [2024-11-20 07:07:20.740104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:38.694 [2024-11-20 07:07:20.740119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.694 [2024-11-20 07:07:20.743123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.694 [2024-11-20 07:07:20.743176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.694 BaseBdev1 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:38.694 BaseBdev2_malloc 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 true 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 [2024-11-20 07:07:20.824122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:38.694 [2024-11-20 07:07:20.824211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.694 [2024-11-20 07:07:20.824237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:38.694 [2024-11-20 07:07:20.824251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.694 [2024-11-20 07:07:20.827230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.694 [2024-11-20 07:07:20.827281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:38.694 BaseBdev2 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:38.694 07:07:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 [2024-11-20 07:07:20.836223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.694 [2024-11-20 07:07:20.838804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.694 [2024-11-20 07:07:20.839058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.694 [2024-11-20 07:07:20.839087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:38.694 [2024-11-20 07:07:20.839447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:38.694 [2024-11-20 07:07:20.839695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.694 [2024-11-20 07:07:20.839718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:38.694 [2024-11-20 07:07:20.839949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.694 07:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.694 "name": "raid_bdev1", 00:10:38.694 "uuid": "42f28014-23f2-4d6e-9339-5003cd88d8e3", 00:10:38.694 "strip_size_kb": 64, 00:10:38.694 "state": "online", 00:10:38.695 "raid_level": "raid0", 00:10:38.695 "superblock": true, 00:10:38.695 "num_base_bdevs": 2, 00:10:38.695 "num_base_bdevs_discovered": 2, 00:10:38.695 "num_base_bdevs_operational": 2, 00:10:38.695 "base_bdevs_list": [ 00:10:38.695 { 00:10:38.695 "name": "BaseBdev1", 00:10:38.695 "uuid": "3df11fe9-e61d-58ae-859e-e42779f98b50", 00:10:38.695 "is_configured": true, 00:10:38.695 "data_offset": 2048, 00:10:38.695 "data_size": 63488 00:10:38.695 }, 00:10:38.695 { 00:10:38.695 "name": "BaseBdev2", 00:10:38.695 "uuid": "9a0f4e76-44f5-5139-aaca-c4c8b1ee9825", 00:10:38.695 "is_configured": true, 00:10:38.695 "data_offset": 2048, 00:10:38.695 "data_size": 63488 00:10:38.695 } 00:10:38.695 ] 00:10:38.695 }' 00:10:38.695 07:07:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.695 07:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.259 07:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.259 07:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:39.260 [2024-11-20 07:07:21.413042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.195 "name": "raid_bdev1", 00:10:40.195 "uuid": "42f28014-23f2-4d6e-9339-5003cd88d8e3", 00:10:40.195 "strip_size_kb": 64, 00:10:40.195 "state": "online", 00:10:40.195 "raid_level": "raid0", 00:10:40.195 "superblock": true, 00:10:40.195 "num_base_bdevs": 2, 00:10:40.195 "num_base_bdevs_discovered": 2, 00:10:40.195 "num_base_bdevs_operational": 2, 00:10:40.195 "base_bdevs_list": [ 00:10:40.195 { 00:10:40.195 "name": "BaseBdev1", 00:10:40.195 "uuid": "3df11fe9-e61d-58ae-859e-e42779f98b50", 00:10:40.195 "is_configured": true, 00:10:40.195 "data_offset": 2048, 00:10:40.195 "data_size": 63488 00:10:40.195 }, 00:10:40.195 { 00:10:40.195 "name": "BaseBdev2", 00:10:40.195 "uuid": "9a0f4e76-44f5-5139-aaca-c4c8b1ee9825", 00:10:40.195 "is_configured": true, 00:10:40.195 "data_offset": 2048, 00:10:40.195 "data_size": 63488 00:10:40.195 } 00:10:40.195 ] 00:10:40.195 }' 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.195 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.763 [2024-11-20 07:07:22.760545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.763 [2024-11-20 07:07:22.760609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.763 [2024-11-20 07:07:22.764060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.763 [2024-11-20 07:07:22.764123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.763 [2024-11-20 07:07:22.764167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.763 [2024-11-20 07:07:22.764182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:40.763 { 00:10:40.763 "results": [ 00:10:40.763 { 00:10:40.763 "job": "raid_bdev1", 00:10:40.763 "core_mask": "0x1", 00:10:40.763 "workload": "randrw", 00:10:40.763 "percentage": 50, 00:10:40.763 "status": "finished", 00:10:40.763 "queue_depth": 1, 00:10:40.763 "io_size": 131072, 00:10:40.763 "runtime": 1.34755, 00:10:40.763 "iops": 11559.496864680346, 00:10:40.763 "mibps": 1444.9371080850433, 00:10:40.763 "io_failed": 1, 00:10:40.763 "io_timeout": 0, 00:10:40.763 "avg_latency_us": 121.56418014207696, 00:10:40.763 "min_latency_us": 31.972052401746726, 00:10:40.763 "max_latency_us": 2289.467248908297 00:10:40.763 } 00:10:40.763 ], 00:10:40.763 "core_count": 1 00:10:40.763 } 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61670 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61670 ']' 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61670 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61670 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.763 killing process with pid 61670 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61670' 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61670 00:10:40.763 [2024-11-20 07:07:22.800990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.763 07:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61670 00:10:40.763 [2024-11-20 07:07:22.982012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iproMslj64 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:42.669 00:10:42.669 real 0m4.803s 00:10:42.669 user 0m5.614s 00:10:42.669 sys 0m0.678s 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.669 07:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.669 ************************************ 00:10:42.669 END TEST raid_read_error_test 00:10:42.669 ************************************ 00:10:42.669 07:07:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:42.669 07:07:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:42.669 07:07:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.669 07:07:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.669 ************************************ 00:10:42.669 START TEST raid_write_error_test 00:10:42.669 ************************************ 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.669 07:07:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tc9bMw61uG 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61821 00:10:42.669 07:07:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61821 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61821 ']' 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.669 07:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.669 [2024-11-20 07:07:24.643938] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:42.669 [2024-11-20 07:07:24.644077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61821 ] 00:10:42.669 [2024-11-20 07:07:24.806614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.929 [2024-11-20 07:07:24.961696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.188 [2024-11-20 07:07:25.239400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.188 [2024-11-20 07:07:25.239494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.447 BaseBdev1_malloc 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.447 true 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.447 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.447 [2024-11-20 07:07:25.631930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.447 [2024-11-20 07:07:25.632006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.447 [2024-11-20 07:07:25.632032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:43.447 [2024-11-20 07:07:25.632045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.448 [2024-11-20 07:07:25.634855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.448 [2024-11-20 07:07:25.634900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.448 BaseBdev1 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.448 BaseBdev2_malloc 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.448 07:07:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.448 true 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.448 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.707 [2024-11-20 07:07:25.712847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.707 [2024-11-20 07:07:25.712926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.707 [2024-11-20 07:07:25.712949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.707 [2024-11-20 07:07:25.712962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.707 [2024-11-20 07:07:25.715714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.707 [2024-11-20 07:07:25.715757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.707 BaseBdev2 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.707 [2024-11-20 07:07:25.724911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:43.707 [2024-11-20 07:07:25.727328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.707 [2024-11-20 07:07:25.727577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:43.707 [2024-11-20 07:07:25.727604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:43.707 [2024-11-20 07:07:25.727922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:43.707 [2024-11-20 07:07:25.728169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:43.707 [2024-11-20 07:07:25.728191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:43.707 [2024-11-20 07:07:25.728401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.707 "name": "raid_bdev1", 00:10:43.707 "uuid": "75bc971f-9780-461a-916f-fe6398753a3c", 00:10:43.707 "strip_size_kb": 64, 00:10:43.707 "state": "online", 00:10:43.707 "raid_level": "raid0", 00:10:43.707 "superblock": true, 00:10:43.707 "num_base_bdevs": 2, 00:10:43.707 "num_base_bdevs_discovered": 2, 00:10:43.707 "num_base_bdevs_operational": 2, 00:10:43.707 "base_bdevs_list": [ 00:10:43.707 { 00:10:43.707 "name": "BaseBdev1", 00:10:43.707 "uuid": "614bff6e-1704-51bc-9f4d-407f8bc98d6f", 00:10:43.707 "is_configured": true, 00:10:43.707 "data_offset": 2048, 00:10:43.707 "data_size": 63488 00:10:43.707 }, 00:10:43.707 { 00:10:43.707 "name": "BaseBdev2", 00:10:43.707 "uuid": "1474d384-eefe-51c6-94b3-0ae71485e895", 00:10:43.707 "is_configured": true, 00:10:43.707 "data_offset": 2048, 00:10:43.707 "data_size": 63488 00:10:43.707 } 00:10:43.707 ] 00:10:43.707 }' 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.707 07:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.966 07:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.966 07:07:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:44.225 [2024-11-20 07:07:26.274231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.163 07:07:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.163 "name": "raid_bdev1", 00:10:45.163 "uuid": "75bc971f-9780-461a-916f-fe6398753a3c", 00:10:45.163 "strip_size_kb": 64, 00:10:45.163 "state": "online", 00:10:45.163 "raid_level": "raid0", 00:10:45.163 "superblock": true, 00:10:45.163 "num_base_bdevs": 2, 00:10:45.163 "num_base_bdevs_discovered": 2, 00:10:45.163 "num_base_bdevs_operational": 2, 00:10:45.163 "base_bdevs_list": [ 00:10:45.163 { 00:10:45.163 "name": "BaseBdev1", 00:10:45.163 "uuid": "614bff6e-1704-51bc-9f4d-407f8bc98d6f", 00:10:45.163 "is_configured": true, 00:10:45.163 "data_offset": 2048, 00:10:45.163 "data_size": 63488 00:10:45.163 }, 00:10:45.163 { 00:10:45.163 "name": "BaseBdev2", 00:10:45.163 "uuid": "1474d384-eefe-51c6-94b3-0ae71485e895", 00:10:45.163 "is_configured": true, 00:10:45.163 "data_offset": 2048, 00:10:45.163 "data_size": 63488 00:10:45.163 } 00:10:45.163 ] 00:10:45.163 }' 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.163 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.431 [2024-11-20 07:07:27.617392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.431 [2024-11-20 07:07:27.617462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.431 [2024-11-20 07:07:27.620619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.431 [2024-11-20 07:07:27.620676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.431 [2024-11-20 07:07:27.620718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.431 [2024-11-20 07:07:27.620733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.431 { 00:10:45.431 "results": [ 00:10:45.431 { 00:10:45.431 "job": "raid_bdev1", 00:10:45.431 "core_mask": "0x1", 00:10:45.431 "workload": "randrw", 00:10:45.431 "percentage": 50, 00:10:45.431 "status": "finished", 00:10:45.431 "queue_depth": 1, 00:10:45.431 "io_size": 131072, 00:10:45.431 "runtime": 1.343227, 00:10:45.431 "iops": 10897.636810457205, 00:10:45.431 "mibps": 1362.2046013071506, 00:10:45.431 "io_failed": 1, 00:10:45.431 "io_timeout": 0, 00:10:45.431 "avg_latency_us": 129.0253803696592, 00:10:45.431 "min_latency_us": 32.866375545851525, 00:10:45.431 "max_latency_us": 1810.1100436681222 00:10:45.431 } 00:10:45.431 ], 00:10:45.431 "core_count": 1 00:10:45.431 } 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61821 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61821 ']' 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61821 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61821 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61821' 00:10:45.431 killing process with pid 61821 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61821 00:10:45.431 [2024-11-20 07:07:27.655910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.431 07:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61821 00:10:45.703 [2024-11-20 07:07:27.828525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tc9bMw61uG 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:47.077 00:10:47.077 real 0m4.784s 00:10:47.077 user 0m5.629s 00:10:47.077 sys 0m0.636s 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.077 07:07:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.077 ************************************ 00:10:47.077 END TEST raid_write_error_test 00:10:47.077 ************************************ 00:10:47.337 07:07:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:47.337 07:07:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:47.337 07:07:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.337 07:07:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.337 07:07:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.337 ************************************ 00:10:47.337 START TEST raid_state_function_test 00:10:47.337 ************************************ 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61965 00:10:47.337 Process raid pid: 61965 
00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61965' 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61965 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61965 ']' 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.337 07:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.337 [2024-11-20 07:07:29.489346] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:47.337 [2024-11-20 07:07:29.489529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.595 [2024-11-20 07:07:29.674376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.595 [2024-11-20 07:07:29.835917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.855 [2024-11-20 07:07:30.090752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.855 [2024-11-20 07:07:30.090821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.114 [2024-11-20 07:07:30.350631] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.114 [2024-11-20 07:07:30.350716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.114 [2024-11-20 07:07:30.350731] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.114 [2024-11-20 07:07:30.350744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.114 07:07:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.114 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.384 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.384 "name": "Existed_Raid", 00:10:48.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.384 "strip_size_kb": 64, 00:10:48.384 "state": "configuring", 00:10:48.384 
"raid_level": "concat", 00:10:48.384 "superblock": false, 00:10:48.384 "num_base_bdevs": 2, 00:10:48.384 "num_base_bdevs_discovered": 0, 00:10:48.384 "num_base_bdevs_operational": 2, 00:10:48.384 "base_bdevs_list": [ 00:10:48.384 { 00:10:48.384 "name": "BaseBdev1", 00:10:48.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.384 "is_configured": false, 00:10:48.384 "data_offset": 0, 00:10:48.384 "data_size": 0 00:10:48.384 }, 00:10:48.385 { 00:10:48.385 "name": "BaseBdev2", 00:10:48.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.385 "is_configured": false, 00:10:48.385 "data_offset": 0, 00:10:48.385 "data_size": 0 00:10:48.385 } 00:10:48.385 ] 00:10:48.385 }' 00:10:48.385 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.385 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.664 [2024-11-20 07:07:30.877721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.664 [2024-11-20 07:07:30.877786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:48.664 [2024-11-20 07:07:30.889690] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.664 [2024-11-20 07:07:30.889758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.664 [2024-11-20 07:07:30.889770] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.664 [2024-11-20 07:07:30.889784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.664 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.665 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.924 [2024-11-20 07:07:30.948776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.924 BaseBdev1 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.924 [ 00:10:48.924 { 00:10:48.924 "name": "BaseBdev1", 00:10:48.924 "aliases": [ 00:10:48.924 "0f3a75cf-d99f-4cd9-aff4-888912301911" 00:10:48.924 ], 00:10:48.924 "product_name": "Malloc disk", 00:10:48.924 "block_size": 512, 00:10:48.924 "num_blocks": 65536, 00:10:48.924 "uuid": "0f3a75cf-d99f-4cd9-aff4-888912301911", 00:10:48.924 "assigned_rate_limits": { 00:10:48.924 "rw_ios_per_sec": 0, 00:10:48.924 "rw_mbytes_per_sec": 0, 00:10:48.924 "r_mbytes_per_sec": 0, 00:10:48.924 "w_mbytes_per_sec": 0 00:10:48.924 }, 00:10:48.924 "claimed": true, 00:10:48.924 "claim_type": "exclusive_write", 00:10:48.924 "zoned": false, 00:10:48.924 "supported_io_types": { 00:10:48.924 "read": true, 00:10:48.924 "write": true, 00:10:48.924 "unmap": true, 00:10:48.924 "flush": true, 00:10:48.924 "reset": true, 00:10:48.924 "nvme_admin": false, 00:10:48.924 "nvme_io": false, 00:10:48.924 "nvme_io_md": false, 00:10:48.924 "write_zeroes": true, 00:10:48.924 "zcopy": true, 00:10:48.924 "get_zone_info": false, 00:10:48.924 "zone_management": false, 00:10:48.924 "zone_append": false, 00:10:48.924 "compare": false, 00:10:48.924 "compare_and_write": false, 00:10:48.924 "abort": true, 00:10:48.924 "seek_hole": false, 00:10:48.924 "seek_data": false, 00:10:48.924 "copy": true, 00:10:48.924 "nvme_iov_md": 
false 00:10:48.924 }, 00:10:48.924 "memory_domains": [ 00:10:48.924 { 00:10:48.924 "dma_device_id": "system", 00:10:48.924 "dma_device_type": 1 00:10:48.924 }, 00:10:48.924 { 00:10:48.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.924 "dma_device_type": 2 00:10:48.924 } 00:10:48.924 ], 00:10:48.924 "driver_specific": {} 00:10:48.924 } 00:10:48.924 ] 00:10:48.924 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.925 07:07:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.925 07:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.925 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.925 "name": "Existed_Raid", 00:10:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.925 "strip_size_kb": 64, 00:10:48.925 "state": "configuring", 00:10:48.925 "raid_level": "concat", 00:10:48.925 "superblock": false, 00:10:48.925 "num_base_bdevs": 2, 00:10:48.925 "num_base_bdevs_discovered": 1, 00:10:48.925 "num_base_bdevs_operational": 2, 00:10:48.925 "base_bdevs_list": [ 00:10:48.925 { 00:10:48.925 "name": "BaseBdev1", 00:10:48.925 "uuid": "0f3a75cf-d99f-4cd9-aff4-888912301911", 00:10:48.925 "is_configured": true, 00:10:48.925 "data_offset": 0, 00:10:48.925 "data_size": 65536 00:10:48.925 }, 00:10:48.925 { 00:10:48.925 "name": "BaseBdev2", 00:10:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.925 "is_configured": false, 00:10:48.925 "data_offset": 0, 00:10:48.925 "data_size": 0 00:10:48.925 } 00:10:48.925 ] 00:10:48.925 }' 00:10:48.925 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.925 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.492 [2024-11-20 07:07:31.480037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.492 [2024-11-20 07:07:31.480140] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.492 [2024-11-20 07:07:31.492088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.492 [2024-11-20 07:07:31.494572] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.492 [2024-11-20 07:07:31.494635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.492 "name": "Existed_Raid", 00:10:49.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.492 "strip_size_kb": 64, 00:10:49.492 "state": "configuring", 00:10:49.492 "raid_level": "concat", 00:10:49.492 "superblock": false, 00:10:49.492 "num_base_bdevs": 2, 00:10:49.492 "num_base_bdevs_discovered": 1, 00:10:49.492 "num_base_bdevs_operational": 2, 00:10:49.492 "base_bdevs_list": [ 00:10:49.492 { 00:10:49.492 "name": "BaseBdev1", 00:10:49.492 "uuid": "0f3a75cf-d99f-4cd9-aff4-888912301911", 00:10:49.492 "is_configured": true, 00:10:49.492 "data_offset": 0, 00:10:49.492 "data_size": 65536 00:10:49.492 }, 00:10:49.492 { 00:10:49.492 "name": "BaseBdev2", 00:10:49.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.492 "is_configured": false, 00:10:49.492 "data_offset": 0, 00:10:49.492 "data_size": 0 
00:10:49.492 } 00:10:49.492 ] 00:10:49.492 }' 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.492 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.751 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.751 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.751 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.751 [2024-11-20 07:07:31.997019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.751 [2024-11-20 07:07:31.997218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.751 [2024-11-20 07:07:31.997248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:49.751 [2024-11-20 07:07:31.997692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:49.751 [2024-11-20 07:07:31.997966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.751 [2024-11-20 07:07:31.998024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:49.751 [2024-11-20 07:07:31.998424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.751 BaseBdev2 00:10:49.751 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.751 07:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:49.751 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.751 07:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.751 07:07:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.751 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.751 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.751 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.751 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.751 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.751 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.008 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.008 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.008 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.008 [ 00:10:50.008 { 00:10:50.008 "name": "BaseBdev2", 00:10:50.008 "aliases": [ 00:10:50.008 "3a033ca4-fc3d-47aa-a994-2fb9be9f8270" 00:10:50.008 ], 00:10:50.008 "product_name": "Malloc disk", 00:10:50.008 "block_size": 512, 00:10:50.008 "num_blocks": 65536, 00:10:50.008 "uuid": "3a033ca4-fc3d-47aa-a994-2fb9be9f8270", 00:10:50.008 "assigned_rate_limits": { 00:10:50.008 "rw_ios_per_sec": 0, 00:10:50.008 "rw_mbytes_per_sec": 0, 00:10:50.008 "r_mbytes_per_sec": 0, 00:10:50.008 "w_mbytes_per_sec": 0 00:10:50.008 }, 00:10:50.008 "claimed": true, 00:10:50.008 "claim_type": "exclusive_write", 00:10:50.008 "zoned": false, 00:10:50.008 "supported_io_types": { 00:10:50.008 "read": true, 00:10:50.008 "write": true, 00:10:50.008 "unmap": true, 00:10:50.008 "flush": true, 00:10:50.008 "reset": true, 00:10:50.008 "nvme_admin": false, 00:10:50.008 "nvme_io": false, 00:10:50.008 "nvme_io_md": 
false, 00:10:50.008 "write_zeroes": true, 00:10:50.008 "zcopy": true, 00:10:50.008 "get_zone_info": false, 00:10:50.008 "zone_management": false, 00:10:50.008 "zone_append": false, 00:10:50.008 "compare": false, 00:10:50.008 "compare_and_write": false, 00:10:50.008 "abort": true, 00:10:50.009 "seek_hole": false, 00:10:50.009 "seek_data": false, 00:10:50.009 "copy": true, 00:10:50.009 "nvme_iov_md": false 00:10:50.009 }, 00:10:50.009 "memory_domains": [ 00:10:50.009 { 00:10:50.009 "dma_device_id": "system", 00:10:50.009 "dma_device_type": 1 00:10:50.009 }, 00:10:50.009 { 00:10:50.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.009 "dma_device_type": 2 00:10:50.009 } 00:10:50.009 ], 00:10:50.009 "driver_specific": {} 00:10:50.009 } 00:10:50.009 ] 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.009 "name": "Existed_Raid", 00:10:50.009 "uuid": "1160c7fd-b922-46a8-9ab7-72f16843602c", 00:10:50.009 "strip_size_kb": 64, 00:10:50.009 "state": "online", 00:10:50.009 "raid_level": "concat", 00:10:50.009 "superblock": false, 00:10:50.009 "num_base_bdevs": 2, 00:10:50.009 "num_base_bdevs_discovered": 2, 00:10:50.009 "num_base_bdevs_operational": 2, 00:10:50.009 "base_bdevs_list": [ 00:10:50.009 { 00:10:50.009 "name": "BaseBdev1", 00:10:50.009 "uuid": "0f3a75cf-d99f-4cd9-aff4-888912301911", 00:10:50.009 "is_configured": true, 00:10:50.009 "data_offset": 0, 00:10:50.009 "data_size": 65536 00:10:50.009 }, 00:10:50.009 { 00:10:50.009 "name": "BaseBdev2", 00:10:50.009 "uuid": "3a033ca4-fc3d-47aa-a994-2fb9be9f8270", 00:10:50.009 "is_configured": true, 00:10:50.009 "data_offset": 0, 00:10:50.009 "data_size": 65536 00:10:50.009 } 00:10:50.009 ] 00:10:50.009 }' 00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:50.009 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.266 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.266 [2024-11-20 07:07:32.516683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.525 "name": "Existed_Raid", 00:10:50.525 "aliases": [ 00:10:50.525 "1160c7fd-b922-46a8-9ab7-72f16843602c" 00:10:50.525 ], 00:10:50.525 "product_name": "Raid Volume", 00:10:50.525 "block_size": 512, 00:10:50.525 "num_blocks": 131072, 00:10:50.525 "uuid": "1160c7fd-b922-46a8-9ab7-72f16843602c", 00:10:50.525 "assigned_rate_limits": { 00:10:50.525 "rw_ios_per_sec": 0, 00:10:50.525 "rw_mbytes_per_sec": 0, 00:10:50.525 "r_mbytes_per_sec": 
0, 00:10:50.525 "w_mbytes_per_sec": 0 00:10:50.525 }, 00:10:50.525 "claimed": false, 00:10:50.525 "zoned": false, 00:10:50.525 "supported_io_types": { 00:10:50.525 "read": true, 00:10:50.525 "write": true, 00:10:50.525 "unmap": true, 00:10:50.525 "flush": true, 00:10:50.525 "reset": true, 00:10:50.525 "nvme_admin": false, 00:10:50.525 "nvme_io": false, 00:10:50.525 "nvme_io_md": false, 00:10:50.525 "write_zeroes": true, 00:10:50.525 "zcopy": false, 00:10:50.525 "get_zone_info": false, 00:10:50.525 "zone_management": false, 00:10:50.525 "zone_append": false, 00:10:50.525 "compare": false, 00:10:50.525 "compare_and_write": false, 00:10:50.525 "abort": false, 00:10:50.525 "seek_hole": false, 00:10:50.525 "seek_data": false, 00:10:50.525 "copy": false, 00:10:50.525 "nvme_iov_md": false 00:10:50.525 }, 00:10:50.525 "memory_domains": [ 00:10:50.525 { 00:10:50.525 "dma_device_id": "system", 00:10:50.525 "dma_device_type": 1 00:10:50.525 }, 00:10:50.525 { 00:10:50.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.525 "dma_device_type": 2 00:10:50.525 }, 00:10:50.525 { 00:10:50.525 "dma_device_id": "system", 00:10:50.525 "dma_device_type": 1 00:10:50.525 }, 00:10:50.525 { 00:10:50.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.525 "dma_device_type": 2 00:10:50.525 } 00:10:50.525 ], 00:10:50.525 "driver_specific": { 00:10:50.525 "raid": { 00:10:50.525 "uuid": "1160c7fd-b922-46a8-9ab7-72f16843602c", 00:10:50.525 "strip_size_kb": 64, 00:10:50.525 "state": "online", 00:10:50.525 "raid_level": "concat", 00:10:50.525 "superblock": false, 00:10:50.525 "num_base_bdevs": 2, 00:10:50.525 "num_base_bdevs_discovered": 2, 00:10:50.525 "num_base_bdevs_operational": 2, 00:10:50.525 "base_bdevs_list": [ 00:10:50.525 { 00:10:50.525 "name": "BaseBdev1", 00:10:50.525 "uuid": "0f3a75cf-d99f-4cd9-aff4-888912301911", 00:10:50.525 "is_configured": true, 00:10:50.525 "data_offset": 0, 00:10:50.525 "data_size": 65536 00:10:50.525 }, 00:10:50.525 { 00:10:50.525 "name": "BaseBdev2", 
00:10:50.525 "uuid": "3a033ca4-fc3d-47aa-a994-2fb9be9f8270", 00:10:50.525 "is_configured": true, 00:10:50.525 "data_offset": 0, 00:10:50.525 "data_size": 65536 00:10:50.525 } 00:10:50.525 ] 00:10:50.525 } 00:10:50.525 } 00:10:50.525 }' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.525 BaseBdev2' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.525 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.525 [2024-11-20 07:07:32.751982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.525 [2024-11-20 07:07:32.752142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.525 [2024-11-20 07:07:32.752244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.784 "name": "Existed_Raid", 00:10:50.784 "uuid": "1160c7fd-b922-46a8-9ab7-72f16843602c", 00:10:50.784 "strip_size_kb": 64, 00:10:50.784 
"state": "offline", 00:10:50.784 "raid_level": "concat", 00:10:50.784 "superblock": false, 00:10:50.784 "num_base_bdevs": 2, 00:10:50.784 "num_base_bdevs_discovered": 1, 00:10:50.784 "num_base_bdevs_operational": 1, 00:10:50.784 "base_bdevs_list": [ 00:10:50.784 { 00:10:50.784 "name": null, 00:10:50.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.784 "is_configured": false, 00:10:50.784 "data_offset": 0, 00:10:50.784 "data_size": 65536 00:10:50.784 }, 00:10:50.784 { 00:10:50.784 "name": "BaseBdev2", 00:10:50.784 "uuid": "3a033ca4-fc3d-47aa-a994-2fb9be9f8270", 00:10:50.784 "is_configured": true, 00:10:50.784 "data_offset": 0, 00:10:50.784 "data_size": 65536 00:10:50.784 } 00:10:50.784 ] 00:10:50.784 }' 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.784 07:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.350 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:51.350 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.350 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.350 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.350 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.351 [2024-11-20 07:07:33.379743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.351 [2024-11-20 07:07:33.379942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61965 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61965 ']' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61965 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61965 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61965' 00:10:51.351 killing process with pid 61965 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61965 00:10:51.351 [2024-11-20 07:07:33.603662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.351 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61965 00:10:51.609 [2024-11-20 07:07:33.626170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:52.986 ************************************ 00:10:52.986 END TEST raid_state_function_test 00:10:52.986 ************************************ 00:10:52.986 00:10:52.986 real 0m5.727s 00:10:52.986 user 0m8.026s 00:10:52.986 sys 0m0.949s 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.986 07:07:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:52.986 07:07:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:52.986 07:07:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.986 07:07:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.986 ************************************ 00:10:52.986 START TEST raid_state_function_test_sb 00:10:52.986 ************************************ 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:52.986 Process raid pid: 62223 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62223 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62223' 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62223 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62223 ']' 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.986 07:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.245 [2024-11-20 07:07:35.264333] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:53.245 [2024-11-20 07:07:35.264571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.245 [2024-11-20 07:07:35.445000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.505 [2024-11-20 07:07:35.619052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.763 [2024-11-20 07:07:35.910579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.763 [2024-11-20 07:07:35.910786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.021 [2024-11-20 07:07:36.178991] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:10:54.021 [2024-11-20 07:07:36.179170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.021 [2024-11-20 07:07:36.179214] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.021 [2024-11-20 07:07:36.179256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.021 
07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.021 "name": "Existed_Raid", 00:10:54.021 "uuid": "d7a08161-5db0-4597-912f-20b523b8f1be", 00:10:54.021 "strip_size_kb": 64, 00:10:54.021 "state": "configuring", 00:10:54.021 "raid_level": "concat", 00:10:54.021 "superblock": true, 00:10:54.021 "num_base_bdevs": 2, 00:10:54.021 "num_base_bdevs_discovered": 0, 00:10:54.021 "num_base_bdevs_operational": 2, 00:10:54.021 "base_bdevs_list": [ 00:10:54.021 { 00:10:54.021 "name": "BaseBdev1", 00:10:54.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.021 "is_configured": false, 00:10:54.021 "data_offset": 0, 00:10:54.021 "data_size": 0 00:10:54.021 }, 00:10:54.021 { 00:10:54.021 "name": "BaseBdev2", 00:10:54.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.021 "is_configured": false, 00:10:54.021 "data_offset": 0, 00:10:54.021 "data_size": 0 00:10:54.021 } 00:10:54.021 ] 00:10:54.021 }' 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.021 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.586 [2024-11-20 07:07:36.626608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:10:54.586 [2024-11-20 07:07:36.626770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.586 [2024-11-20 07:07:36.638634] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.586 [2024-11-20 07:07:36.638715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.586 [2024-11-20 07:07:36.638729] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.586 [2024-11-20 07:07:36.638745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.586 [2024-11-20 07:07:36.699472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.586 BaseBdev1 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.586 [ 00:10:54.586 { 00:10:54.586 "name": "BaseBdev1", 00:10:54.586 "aliases": [ 00:10:54.586 "80d1d711-c995-4abb-8297-fa09abfedb16" 00:10:54.586 ], 00:10:54.586 "product_name": "Malloc disk", 00:10:54.586 "block_size": 512, 00:10:54.586 "num_blocks": 65536, 00:10:54.586 "uuid": "80d1d711-c995-4abb-8297-fa09abfedb16", 00:10:54.586 "assigned_rate_limits": { 00:10:54.586 "rw_ios_per_sec": 0, 00:10:54.586 "rw_mbytes_per_sec": 0, 00:10:54.586 "r_mbytes_per_sec": 0, 00:10:54.586 "w_mbytes_per_sec": 0 00:10:54.586 }, 00:10:54.586 "claimed": true, 
00:10:54.586 "claim_type": "exclusive_write", 00:10:54.586 "zoned": false, 00:10:54.586 "supported_io_types": { 00:10:54.586 "read": true, 00:10:54.586 "write": true, 00:10:54.586 "unmap": true, 00:10:54.586 "flush": true, 00:10:54.586 "reset": true, 00:10:54.586 "nvme_admin": false, 00:10:54.586 "nvme_io": false, 00:10:54.586 "nvme_io_md": false, 00:10:54.586 "write_zeroes": true, 00:10:54.586 "zcopy": true, 00:10:54.586 "get_zone_info": false, 00:10:54.586 "zone_management": false, 00:10:54.586 "zone_append": false, 00:10:54.586 "compare": false, 00:10:54.586 "compare_and_write": false, 00:10:54.586 "abort": true, 00:10:54.586 "seek_hole": false, 00:10:54.586 "seek_data": false, 00:10:54.586 "copy": true, 00:10:54.586 "nvme_iov_md": false 00:10:54.586 }, 00:10:54.586 "memory_domains": [ 00:10:54.586 { 00:10:54.586 "dma_device_id": "system", 00:10:54.586 "dma_device_type": 1 00:10:54.586 }, 00:10:54.586 { 00:10:54.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.586 "dma_device_type": 2 00:10:54.586 } 00:10:54.586 ], 00:10:54.586 "driver_specific": {} 00:10:54.586 } 00:10:54.586 ] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.586 07:07:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.586 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.586 "name": "Existed_Raid", 00:10:54.586 "uuid": "e91bc1a8-c42e-46ee-aea1-29c095fab482", 00:10:54.586 "strip_size_kb": 64, 00:10:54.586 "state": "configuring", 00:10:54.586 "raid_level": "concat", 00:10:54.586 "superblock": true, 00:10:54.586 "num_base_bdevs": 2, 00:10:54.587 "num_base_bdevs_discovered": 1, 00:10:54.587 "num_base_bdevs_operational": 2, 00:10:54.587 "base_bdevs_list": [ 00:10:54.587 { 00:10:54.587 "name": "BaseBdev1", 00:10:54.587 "uuid": "80d1d711-c995-4abb-8297-fa09abfedb16", 00:10:54.587 "is_configured": true, 00:10:54.587 "data_offset": 2048, 00:10:54.587 "data_size": 63488 00:10:54.587 }, 00:10:54.587 { 00:10:54.587 "name": "BaseBdev2", 00:10:54.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.587 
"is_configured": false, 00:10:54.587 "data_offset": 0, 00:10:54.587 "data_size": 0 00:10:54.587 } 00:10:54.587 ] 00:10:54.587 }' 00:10:54.587 07:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.587 07:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.154 [2024-11-20 07:07:37.158946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.154 [2024-11-20 07:07:37.159046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.154 [2024-11-20 07:07:37.167017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.154 [2024-11-20 07:07:37.169686] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.154 [2024-11-20 07:07:37.169841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.154 07:07:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.154 07:07:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.154 "name": "Existed_Raid", 00:10:55.154 "uuid": "7ecd0c0c-2093-495f-a3b4-5df11c513868", 00:10:55.154 "strip_size_kb": 64, 00:10:55.154 "state": "configuring", 00:10:55.154 "raid_level": "concat", 00:10:55.154 "superblock": true, 00:10:55.154 "num_base_bdevs": 2, 00:10:55.154 "num_base_bdevs_discovered": 1, 00:10:55.154 "num_base_bdevs_operational": 2, 00:10:55.154 "base_bdevs_list": [ 00:10:55.154 { 00:10:55.154 "name": "BaseBdev1", 00:10:55.154 "uuid": "80d1d711-c995-4abb-8297-fa09abfedb16", 00:10:55.154 "is_configured": true, 00:10:55.154 "data_offset": 2048, 00:10:55.154 "data_size": 63488 00:10:55.154 }, 00:10:55.154 { 00:10:55.154 "name": "BaseBdev2", 00:10:55.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.154 "is_configured": false, 00:10:55.154 "data_offset": 0, 00:10:55.154 "data_size": 0 00:10:55.154 } 00:10:55.154 ] 00:10:55.154 }' 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.154 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 [2024-11-20 07:07:37.652550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.413 [2024-11-20 07:07:37.653134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.413 [2024-11-20 07:07:37.653203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:55.413 [2024-11-20 07:07:37.653611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:10:55.413 BaseBdev2 00:10:55.413 [2024-11-20 07:07:37.653857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.413 [2024-11-20 07:07:37.653923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:55.413 [2024-11-20 07:07:37.654166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.413 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.413 07:07:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 [ 00:10:55.413 { 00:10:55.413 "name": "BaseBdev2", 00:10:55.413 "aliases": [ 00:10:55.413 "3c157854-b824-4435-8527-c56e8e872d3c" 00:10:55.413 ], 00:10:55.413 "product_name": "Malloc disk", 00:10:55.413 "block_size": 512, 00:10:55.413 "num_blocks": 65536, 00:10:55.413 "uuid": "3c157854-b824-4435-8527-c56e8e872d3c", 00:10:55.413 "assigned_rate_limits": { 00:10:55.413 "rw_ios_per_sec": 0, 00:10:55.413 "rw_mbytes_per_sec": 0, 00:10:55.413 "r_mbytes_per_sec": 0, 00:10:55.413 "w_mbytes_per_sec": 0 00:10:55.413 }, 00:10:55.413 "claimed": true, 00:10:55.413 "claim_type": "exclusive_write", 00:10:55.413 "zoned": false, 00:10:55.413 "supported_io_types": { 00:10:55.671 "read": true, 00:10:55.671 "write": true, 00:10:55.671 "unmap": true, 00:10:55.671 "flush": true, 00:10:55.671 "reset": true, 00:10:55.671 "nvme_admin": false, 00:10:55.671 "nvme_io": false, 00:10:55.671 "nvme_io_md": false, 00:10:55.671 "write_zeroes": true, 00:10:55.671 "zcopy": true, 00:10:55.671 "get_zone_info": false, 00:10:55.671 "zone_management": false, 00:10:55.671 "zone_append": false, 00:10:55.671 "compare": false, 00:10:55.671 "compare_and_write": false, 00:10:55.671 "abort": true, 00:10:55.671 "seek_hole": false, 00:10:55.671 "seek_data": false, 00:10:55.671 "copy": true, 00:10:55.671 "nvme_iov_md": false 00:10:55.671 }, 00:10:55.671 "memory_domains": [ 00:10:55.671 { 00:10:55.671 "dma_device_id": "system", 00:10:55.671 "dma_device_type": 1 00:10:55.671 }, 00:10:55.671 { 00:10:55.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.671 "dma_device_type": 2 00:10:55.671 } 00:10:55.671 ], 00:10:55.671 "driver_specific": {} 00:10:55.671 } 00:10:55.671 ] 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.671 07:07:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.671 07:07:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.671 "name": "Existed_Raid", 00:10:55.671 "uuid": "7ecd0c0c-2093-495f-a3b4-5df11c513868", 00:10:55.671 "strip_size_kb": 64, 00:10:55.671 "state": "online", 00:10:55.671 "raid_level": "concat", 00:10:55.671 "superblock": true, 00:10:55.671 "num_base_bdevs": 2, 00:10:55.671 "num_base_bdevs_discovered": 2, 00:10:55.671 "num_base_bdevs_operational": 2, 00:10:55.671 "base_bdevs_list": [ 00:10:55.671 { 00:10:55.671 "name": "BaseBdev1", 00:10:55.671 "uuid": "80d1d711-c995-4abb-8297-fa09abfedb16", 00:10:55.671 "is_configured": true, 00:10:55.671 "data_offset": 2048, 00:10:55.671 "data_size": 63488 00:10:55.671 }, 00:10:55.671 { 00:10:55.671 "name": "BaseBdev2", 00:10:55.671 "uuid": "3c157854-b824-4435-8527-c56e8e872d3c", 00:10:55.671 "is_configured": true, 00:10:55.671 "data_offset": 2048, 00:10:55.671 "data_size": 63488 00:10:55.671 } 00:10:55.671 ] 00:10:55.671 }' 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.671 07:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.928 [2024-11-20 07:07:38.128526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.928 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.928 "name": "Existed_Raid", 00:10:55.928 "aliases": [ 00:10:55.928 "7ecd0c0c-2093-495f-a3b4-5df11c513868" 00:10:55.928 ], 00:10:55.928 "product_name": "Raid Volume", 00:10:55.928 "block_size": 512, 00:10:55.928 "num_blocks": 126976, 00:10:55.928 "uuid": "7ecd0c0c-2093-495f-a3b4-5df11c513868", 00:10:55.928 "assigned_rate_limits": { 00:10:55.928 "rw_ios_per_sec": 0, 00:10:55.928 "rw_mbytes_per_sec": 0, 00:10:55.928 "r_mbytes_per_sec": 0, 00:10:55.928 "w_mbytes_per_sec": 0 00:10:55.928 }, 00:10:55.928 "claimed": false, 00:10:55.928 "zoned": false, 00:10:55.928 "supported_io_types": { 00:10:55.928 "read": true, 00:10:55.928 "write": true, 00:10:55.928 "unmap": true, 00:10:55.928 "flush": true, 00:10:55.928 "reset": true, 00:10:55.928 "nvme_admin": false, 00:10:55.928 "nvme_io": false, 00:10:55.928 "nvme_io_md": false, 00:10:55.928 "write_zeroes": true, 00:10:55.928 "zcopy": false, 00:10:55.928 "get_zone_info": false, 00:10:55.928 "zone_management": false, 00:10:55.928 "zone_append": false, 00:10:55.928 "compare": false, 00:10:55.928 "compare_and_write": false, 00:10:55.928 "abort": false, 00:10:55.928 "seek_hole": false, 00:10:55.928 "seek_data": false, 00:10:55.928 "copy": false, 00:10:55.928 "nvme_iov_md": false 00:10:55.928 }, 00:10:55.928 "memory_domains": [ 00:10:55.928 { 00:10:55.928 
"dma_device_id": "system", 00:10:55.928 "dma_device_type": 1 00:10:55.928 }, 00:10:55.928 { 00:10:55.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.928 "dma_device_type": 2 00:10:55.928 }, 00:10:55.928 { 00:10:55.928 "dma_device_id": "system", 00:10:55.928 "dma_device_type": 1 00:10:55.928 }, 00:10:55.928 { 00:10:55.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.928 "dma_device_type": 2 00:10:55.928 } 00:10:55.928 ], 00:10:55.928 "driver_specific": { 00:10:55.928 "raid": { 00:10:55.928 "uuid": "7ecd0c0c-2093-495f-a3b4-5df11c513868", 00:10:55.928 "strip_size_kb": 64, 00:10:55.928 "state": "online", 00:10:55.928 "raid_level": "concat", 00:10:55.928 "superblock": true, 00:10:55.928 "num_base_bdevs": 2, 00:10:55.928 "num_base_bdevs_discovered": 2, 00:10:55.929 "num_base_bdevs_operational": 2, 00:10:55.929 "base_bdevs_list": [ 00:10:55.929 { 00:10:55.929 "name": "BaseBdev1", 00:10:55.929 "uuid": "80d1d711-c995-4abb-8297-fa09abfedb16", 00:10:55.929 "is_configured": true, 00:10:55.929 "data_offset": 2048, 00:10:55.929 "data_size": 63488 00:10:55.929 }, 00:10:55.929 { 00:10:55.929 "name": "BaseBdev2", 00:10:55.929 "uuid": "3c157854-b824-4435-8527-c56e8e872d3c", 00:10:55.929 "is_configured": true, 00:10:55.929 "data_offset": 2048, 00:10:55.929 "data_size": 63488 00:10:55.929 } 00:10:55.929 ] 00:10:55.929 } 00:10:55.929 } 00:10:55.929 }' 00:10:55.929 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.187 BaseBdev2' 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.187 07:07:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.187 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.187 [2024-11-20 07:07:38.355880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.187 [2024-11-20 07:07:38.356057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.187 [2024-11-20 07:07:38.356164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.445 "name": "Existed_Raid", 00:10:56.445 "uuid": "7ecd0c0c-2093-495f-a3b4-5df11c513868", 00:10:56.445 "strip_size_kb": 64, 00:10:56.445 "state": "offline", 00:10:56.445 "raid_level": "concat", 00:10:56.445 "superblock": true, 00:10:56.445 "num_base_bdevs": 2, 00:10:56.445 "num_base_bdevs_discovered": 1, 00:10:56.445 "num_base_bdevs_operational": 1, 00:10:56.445 "base_bdevs_list": [ 00:10:56.445 { 00:10:56.445 "name": null, 00:10:56.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.445 "is_configured": false, 00:10:56.445 "data_offset": 0, 00:10:56.445 "data_size": 63488 00:10:56.445 }, 00:10:56.445 { 00:10:56.445 "name": "BaseBdev2", 00:10:56.445 "uuid": "3c157854-b824-4435-8527-c56e8e872d3c", 00:10:56.445 "is_configured": true, 00:10:56.445 "data_offset": 2048, 00:10:56.445 "data_size": 63488 00:10:56.445 } 00:10:56.445 ] 
00:10:56.445 }' 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.445 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.703 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.703 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.703 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.703 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.703 07:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.703 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.962 07:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.962 [2024-11-20 07:07:39.016403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.962 [2024-11-20 07:07:39.016598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.962 07:07:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62223 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62223 ']' 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62223 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.962 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62223 00:10:57.220 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.220 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:10:57.220 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62223' 00:10:57.220 killing process with pid 62223 00:10:57.220 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62223 00:10:57.220 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62223 00:10:57.220 [2024-11-20 07:07:39.240660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.220 [2024-11-20 07:07:39.261439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.595 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:58.595 00:10:58.595 real 0m5.544s 00:10:58.595 user 0m7.740s 00:10:58.595 sys 0m0.925s 00:10:58.595 ************************************ 00:10:58.595 END TEST raid_state_function_test_sb 00:10:58.595 ************************************ 00:10:58.595 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.595 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.595 07:07:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:58.595 07:07:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:58.595 07:07:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.595 07:07:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.595 ************************************ 00:10:58.595 START TEST raid_superblock_test 00:10:58.595 ************************************ 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62482 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62482 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62482 ']' 00:10:58.595 07:07:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.595 07:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.595 [2024-11-20 07:07:40.857423] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:58.595 [2024-11-20 07:07:40.857565] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62482 ] 00:10:58.853 [2024-11-20 07:07:41.034495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.111 [2024-11-20 07:07:41.224109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.369 [2024-11-20 07:07:41.492831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.369 [2024-11-20 07:07:41.492932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.627 
07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.627 malloc1 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.627 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.627 [2024-11-20 07:07:41.817227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.627 [2024-11-20 07:07:41.817488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.627 [2024-11-20 07:07:41.817558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:59.627 [2024-11-20 07:07:41.817598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:59.627 [2024-11-20 07:07:41.820722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.628 [2024-11-20 07:07:41.820843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.628 pt1 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.628 malloc2 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:59.628 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.628 [2024-11-20 07:07:41.889140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.628 [2024-11-20 07:07:41.889346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.628 [2024-11-20 07:07:41.889417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:59.628 [2024-11-20 07:07:41.889459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.886 [2024-11-20 07:07:41.892534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.886 [2024-11-20 07:07:41.892653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.886 pt2 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.886 [2024-11-20 07:07:41.901603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:59.886 [2024-11-20 07:07:41.904440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.886 [2024-11-20 07:07:41.904721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:59.886 [2024-11-20 07:07:41.904740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:10:59.886 [2024-11-20 07:07:41.905154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:59.886 [2024-11-20 07:07:41.905398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:59.886 [2024-11-20 07:07:41.905433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:59.886 [2024-11-20 07:07:41.905776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.886 07:07:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.886 "name": "raid_bdev1", 00:10:59.886 "uuid": "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e", 00:10:59.886 "strip_size_kb": 64, 00:10:59.886 "state": "online", 00:10:59.886 "raid_level": "concat", 00:10:59.886 "superblock": true, 00:10:59.886 "num_base_bdevs": 2, 00:10:59.886 "num_base_bdevs_discovered": 2, 00:10:59.886 "num_base_bdevs_operational": 2, 00:10:59.886 "base_bdevs_list": [ 00:10:59.886 { 00:10:59.886 "name": "pt1", 00:10:59.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.886 "is_configured": true, 00:10:59.886 "data_offset": 2048, 00:10:59.886 "data_size": 63488 00:10:59.886 }, 00:10:59.886 { 00:10:59.886 "name": "pt2", 00:10:59.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.886 "is_configured": true, 00:10:59.886 "data_offset": 2048, 00:10:59.886 "data_size": 63488 00:10:59.886 } 00:10:59.886 ] 00:10:59.886 }' 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.886 07:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.143 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.143 [2024-11-20 07:07:42.393873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.402 "name": "raid_bdev1", 00:11:00.402 "aliases": [ 00:11:00.402 "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e" 00:11:00.402 ], 00:11:00.402 "product_name": "Raid Volume", 00:11:00.402 "block_size": 512, 00:11:00.402 "num_blocks": 126976, 00:11:00.402 "uuid": "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e", 00:11:00.402 "assigned_rate_limits": { 00:11:00.402 "rw_ios_per_sec": 0, 00:11:00.402 "rw_mbytes_per_sec": 0, 00:11:00.402 "r_mbytes_per_sec": 0, 00:11:00.402 "w_mbytes_per_sec": 0 00:11:00.402 }, 00:11:00.402 "claimed": false, 00:11:00.402 "zoned": false, 00:11:00.402 "supported_io_types": { 00:11:00.402 "read": true, 00:11:00.402 "write": true, 00:11:00.402 "unmap": true, 00:11:00.402 "flush": true, 00:11:00.402 "reset": true, 00:11:00.402 "nvme_admin": false, 00:11:00.402 "nvme_io": false, 00:11:00.402 "nvme_io_md": false, 00:11:00.402 "write_zeroes": true, 00:11:00.402 "zcopy": false, 00:11:00.402 "get_zone_info": false, 00:11:00.402 "zone_management": false, 00:11:00.402 "zone_append": false, 00:11:00.402 "compare": false, 00:11:00.402 "compare_and_write": false, 00:11:00.402 "abort": false, 00:11:00.402 
"seek_hole": false, 00:11:00.402 "seek_data": false, 00:11:00.402 "copy": false, 00:11:00.402 "nvme_iov_md": false 00:11:00.402 }, 00:11:00.402 "memory_domains": [ 00:11:00.402 { 00:11:00.402 "dma_device_id": "system", 00:11:00.402 "dma_device_type": 1 00:11:00.402 }, 00:11:00.402 { 00:11:00.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.402 "dma_device_type": 2 00:11:00.402 }, 00:11:00.402 { 00:11:00.402 "dma_device_id": "system", 00:11:00.402 "dma_device_type": 1 00:11:00.402 }, 00:11:00.402 { 00:11:00.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.402 "dma_device_type": 2 00:11:00.402 } 00:11:00.402 ], 00:11:00.402 "driver_specific": { 00:11:00.402 "raid": { 00:11:00.402 "uuid": "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e", 00:11:00.402 "strip_size_kb": 64, 00:11:00.402 "state": "online", 00:11:00.402 "raid_level": "concat", 00:11:00.402 "superblock": true, 00:11:00.402 "num_base_bdevs": 2, 00:11:00.402 "num_base_bdevs_discovered": 2, 00:11:00.402 "num_base_bdevs_operational": 2, 00:11:00.402 "base_bdevs_list": [ 00:11:00.402 { 00:11:00.402 "name": "pt1", 00:11:00.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.402 "is_configured": true, 00:11:00.402 "data_offset": 2048, 00:11:00.402 "data_size": 63488 00:11:00.402 }, 00:11:00.402 { 00:11:00.402 "name": "pt2", 00:11:00.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.402 "is_configured": true, 00:11:00.402 "data_offset": 2048, 00:11:00.402 "data_size": 63488 00:11:00.402 } 00:11:00.402 ] 00:11:00.402 } 00:11:00.402 } 00:11:00.402 }' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.402 pt2' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.402 [2024-11-20 07:07:42.617463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e ']' 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.402 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.402 [2024-11-20 07:07:42.664993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.661 [2024-11-20 07:07:42.665136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.661 [2024-11-20 07:07:42.665279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.661 [2024-11-20 07:07:42.665365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.661 [2024-11-20 07:07:42.665386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.661 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.661 [2024-11-20 07:07:42.812812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:00.661 [2024-11-20 07:07:42.815365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:00.661 [2024-11-20 07:07:42.815557] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:00.661 [2024-11-20 07:07:42.815630] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:00.661 [2024-11-20 07:07:42.815649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.661 [2024-11-20 07:07:42.815662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:00.661 request: 00:11:00.661 { 00:11:00.661 "name": "raid_bdev1", 00:11:00.661 "raid_level": "concat", 00:11:00.662 "base_bdevs": [ 00:11:00.662 "malloc1", 00:11:00.662 "malloc2" 00:11:00.662 ], 00:11:00.662 "strip_size_kb": 64, 00:11:00.662 "superblock": false, 00:11:00.662 "method": "bdev_raid_create", 00:11:00.662 "req_id": 1 00:11:00.662 } 00:11:00.662 Got JSON-RPC error response 00:11:00.662 response: 00:11:00.662 { 00:11:00.662 "code": -17, 00:11:00.662 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:00.662 } 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.662 [2024-11-20 07:07:42.872673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.662 [2024-11-20 07:07:42.872793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.662 [2024-11-20 07:07:42.872824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:00.662 [2024-11-20 07:07:42.872840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.662 [2024-11-20 07:07:42.875847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.662 [2024-11-20 07:07:42.875922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.662 [2024-11-20 07:07:42.876071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:00.662 [2024-11-20 07:07:42.876153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.662 pt1 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.662 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.921 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.921 "name": "raid_bdev1", 00:11:00.921 "uuid": "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e", 00:11:00.921 "strip_size_kb": 64, 00:11:00.921 "state": "configuring", 00:11:00.921 "raid_level": "concat", 00:11:00.921 "superblock": true, 00:11:00.921 "num_base_bdevs": 2, 00:11:00.921 "num_base_bdevs_discovered": 1, 00:11:00.921 "num_base_bdevs_operational": 2, 00:11:00.921 "base_bdevs_list": [ 00:11:00.921 { 00:11:00.921 
"name": "pt1", 00:11:00.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.921 "is_configured": true, 00:11:00.921 "data_offset": 2048, 00:11:00.921 "data_size": 63488 00:11:00.921 }, 00:11:00.921 { 00:11:00.921 "name": null, 00:11:00.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.921 "is_configured": false, 00:11:00.921 "data_offset": 2048, 00:11:00.921 "data_size": 63488 00:11:00.921 } 00:11:00.921 ] 00:11:00.921 }' 00:11:00.921 07:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.921 07:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.180 [2024-11-20 07:07:43.276145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.180 [2024-11-20 07:07:43.276394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.180 [2024-11-20 07:07:43.276430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:01.180 [2024-11-20 07:07:43.276445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.180 [2024-11-20 07:07:43.277076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.180 [2024-11-20 07:07:43.277101] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.180 [2024-11-20 07:07:43.277217] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:01.180 [2024-11-20 07:07:43.277248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.180 [2024-11-20 07:07:43.277422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:01.180 [2024-11-20 07:07:43.277443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:01.180 [2024-11-20 07:07:43.277734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:01.180 [2024-11-20 07:07:43.277924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:01.180 [2024-11-20 07:07:43.277936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:01.180 [2024-11-20 07:07:43.278109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.180 pt2 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.180 
07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.180 "name": "raid_bdev1", 00:11:01.180 "uuid": "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e", 00:11:01.180 "strip_size_kb": 64, 00:11:01.180 "state": "online", 00:11:01.180 "raid_level": "concat", 00:11:01.180 "superblock": true, 00:11:01.180 "num_base_bdevs": 2, 00:11:01.180 "num_base_bdevs_discovered": 2, 00:11:01.180 "num_base_bdevs_operational": 2, 00:11:01.180 "base_bdevs_list": [ 00:11:01.180 { 00:11:01.180 "name": "pt1", 00:11:01.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.180 "is_configured": true, 00:11:01.180 "data_offset": 2048, 00:11:01.180 "data_size": 63488 00:11:01.180 }, 00:11:01.180 { 00:11:01.180 "name": "pt2", 00:11:01.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.180 "is_configured": true, 00:11:01.180 "data_offset": 2048, 00:11:01.180 "data_size": 63488 
00:11:01.180 } 00:11:01.180 ] 00:11:01.180 }' 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.180 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.747 [2024-11-20 07:07:43.783887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.747 "name": "raid_bdev1", 00:11:01.747 "aliases": [ 00:11:01.747 "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e" 00:11:01.747 ], 00:11:01.747 "product_name": "Raid Volume", 00:11:01.747 "block_size": 512, 00:11:01.747 "num_blocks": 126976, 00:11:01.747 "uuid": "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e", 00:11:01.747 "assigned_rate_limits": { 00:11:01.747 
"rw_ios_per_sec": 0, 00:11:01.747 "rw_mbytes_per_sec": 0, 00:11:01.747 "r_mbytes_per_sec": 0, 00:11:01.747 "w_mbytes_per_sec": 0 00:11:01.747 }, 00:11:01.747 "claimed": false, 00:11:01.747 "zoned": false, 00:11:01.747 "supported_io_types": { 00:11:01.747 "read": true, 00:11:01.747 "write": true, 00:11:01.747 "unmap": true, 00:11:01.747 "flush": true, 00:11:01.747 "reset": true, 00:11:01.747 "nvme_admin": false, 00:11:01.747 "nvme_io": false, 00:11:01.747 "nvme_io_md": false, 00:11:01.747 "write_zeroes": true, 00:11:01.747 "zcopy": false, 00:11:01.747 "get_zone_info": false, 00:11:01.747 "zone_management": false, 00:11:01.747 "zone_append": false, 00:11:01.747 "compare": false, 00:11:01.747 "compare_and_write": false, 00:11:01.747 "abort": false, 00:11:01.747 "seek_hole": false, 00:11:01.747 "seek_data": false, 00:11:01.747 "copy": false, 00:11:01.747 "nvme_iov_md": false 00:11:01.747 }, 00:11:01.747 "memory_domains": [ 00:11:01.747 { 00:11:01.747 "dma_device_id": "system", 00:11:01.747 "dma_device_type": 1 00:11:01.747 }, 00:11:01.747 { 00:11:01.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.747 "dma_device_type": 2 00:11:01.747 }, 00:11:01.747 { 00:11:01.747 "dma_device_id": "system", 00:11:01.747 "dma_device_type": 1 00:11:01.747 }, 00:11:01.747 { 00:11:01.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.747 "dma_device_type": 2 00:11:01.747 } 00:11:01.747 ], 00:11:01.747 "driver_specific": { 00:11:01.747 "raid": { 00:11:01.747 "uuid": "dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e", 00:11:01.747 "strip_size_kb": 64, 00:11:01.747 "state": "online", 00:11:01.747 "raid_level": "concat", 00:11:01.747 "superblock": true, 00:11:01.747 "num_base_bdevs": 2, 00:11:01.747 "num_base_bdevs_discovered": 2, 00:11:01.747 "num_base_bdevs_operational": 2, 00:11:01.747 "base_bdevs_list": [ 00:11:01.747 { 00:11:01.747 "name": "pt1", 00:11:01.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.747 "is_configured": true, 00:11:01.747 "data_offset": 2048, 00:11:01.747 
"data_size": 63488 00:11:01.747 }, 00:11:01.747 { 00:11:01.747 "name": "pt2", 00:11:01.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.747 "is_configured": true, 00:11:01.747 "data_offset": 2048, 00:11:01.747 "data_size": 63488 00:11:01.747 } 00:11:01.747 ] 00:11:01.747 } 00:11:01.747 } 00:11:01.747 }' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.747 pt2' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.747 07:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.013 [2024-11-20 07:07:44.039664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e '!=' dd5a40ea-2137-4d7b-bfd9-308c20d1cf5e ']' 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62482 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62482 ']' 
00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62482 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62482 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62482' 00:11:02.013 killing process with pid 62482 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62482 00:11:02.013 [2024-11-20 07:07:44.124443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.013 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62482 00:11:02.013 [2024-11-20 07:07:44.124722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.013 [2024-11-20 07:07:44.124862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.013 [2024-11-20 07:07:44.124921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:02.282 [2024-11-20 07:07:44.391563] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.657 07:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:03.657 00:11:03.657 real 0m5.048s 00:11:03.657 user 0m6.899s 00:11:03.657 sys 0m0.835s 00:11:03.657 07:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.657 07:07:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.657 ************************************ 00:11:03.657 END TEST raid_superblock_test 00:11:03.657 ************************************ 00:11:03.657 07:07:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:11:03.657 07:07:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.657 07:07:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.657 07:07:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.657 ************************************ 00:11:03.657 START TEST raid_read_error_test 00:11:03.657 ************************************ 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.657 
07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UUFMkmsBfC 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62698 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62698 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62698 ']' 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.657 07:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.917 [2024-11-20 07:07:45.995082] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:03.917 [2024-11-20 07:07:45.995380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62698 ] 00:11:03.917 [2024-11-20 07:07:46.179677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.175 [2024-11-20 07:07:46.343696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.433 [2024-11-20 07:07:46.622234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.433 [2024-11-20 07:07:46.622485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.691 BaseBdev1_malloc 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.691 true 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.691 [2024-11-20 07:07:46.943810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.691 [2024-11-20 07:07:46.943906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.691 [2024-11-20 07:07:46.943938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:04.691 [2024-11-20 07:07:46.943953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.691 [2024-11-20 07:07:46.947017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.691 [2024-11-20 07:07:46.947079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.691 BaseBdev1 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.691 07:07:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.691 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.950 BaseBdev2_malloc 00:11:04.950 07:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.950 true 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.950 [2024-11-20 07:07:47.011244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:04.950 [2024-11-20 07:07:47.011352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.950 [2024-11-20 07:07:47.011383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:04.950 [2024-11-20 07:07:47.011398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.950 [2024-11-20 07:07:47.014425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.950 [2024-11-20 07:07:47.014484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:11:04.950 BaseBdev2 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.950 [2024-11-20 07:07:47.019518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.950 [2024-11-20 07:07:47.022340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.950 [2024-11-20 07:07:47.022649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.950 [2024-11-20 07:07:47.022669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:04.950 [2024-11-20 07:07:47.023020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:04.950 [2024-11-20 07:07:47.023250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.950 [2024-11-20 07:07:47.023265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:04.950 [2024-11-20 07:07:47.023662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.950 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.950 "name": "raid_bdev1", 00:11:04.950 "uuid": "2ad90616-61e0-4be2-8b9d-646d4e862e02", 00:11:04.950 "strip_size_kb": 64, 00:11:04.950 "state": "online", 00:11:04.950 "raid_level": "concat", 00:11:04.950 "superblock": true, 00:11:04.950 "num_base_bdevs": 2, 00:11:04.950 "num_base_bdevs_discovered": 2, 00:11:04.950 "num_base_bdevs_operational": 2, 00:11:04.950 "base_bdevs_list": [ 00:11:04.950 { 00:11:04.951 "name": "BaseBdev1", 00:11:04.951 "uuid": "545923b2-d131-52a7-9cc4-39308f1e1202", 00:11:04.951 "is_configured": true, 00:11:04.951 "data_offset": 2048, 00:11:04.951 "data_size": 63488 
00:11:04.951 }, 00:11:04.951 { 00:11:04.951 "name": "BaseBdev2", 00:11:04.951 "uuid": "d8825d1b-8b18-59c7-afa2-967552e28efb", 00:11:04.951 "is_configured": true, 00:11:04.951 "data_offset": 2048, 00:11:04.951 "data_size": 63488 00:11:04.951 } 00:11:04.951 ] 00:11:04.951 }' 00:11:04.951 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.951 07:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.209 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:05.209 07:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:05.469 [2024-11-20 07:07:47.576395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.405 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.406 "name": "raid_bdev1", 00:11:06.406 "uuid": "2ad90616-61e0-4be2-8b9d-646d4e862e02", 00:11:06.406 "strip_size_kb": 64, 00:11:06.406 "state": "online", 00:11:06.406 "raid_level": "concat", 00:11:06.406 "superblock": true, 00:11:06.406 "num_base_bdevs": 2, 00:11:06.406 "num_base_bdevs_discovered": 2, 00:11:06.406 "num_base_bdevs_operational": 2, 00:11:06.406 "base_bdevs_list": [ 00:11:06.406 { 00:11:06.406 "name": "BaseBdev1", 00:11:06.406 "uuid": "545923b2-d131-52a7-9cc4-39308f1e1202", 00:11:06.406 "is_configured": true, 00:11:06.406 "data_offset": 2048, 00:11:06.406 "data_size": 63488 
00:11:06.406 }, 00:11:06.406 { 00:11:06.406 "name": "BaseBdev2", 00:11:06.406 "uuid": "d8825d1b-8b18-59c7-afa2-967552e28efb", 00:11:06.406 "is_configured": true, 00:11:06.406 "data_offset": 2048, 00:11:06.406 "data_size": 63488 00:11:06.406 } 00:11:06.406 ] 00:11:06.406 }' 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.406 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.974 [2024-11-20 07:07:48.947153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.974 [2024-11-20 07:07:48.947320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.974 [2024-11-20 07:07:48.950482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.974 [2024-11-20 07:07:48.950611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.974 [2024-11-20 07:07:48.950673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.974 [2024-11-20 07:07:48.950732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:06.974 { 00:11:06.974 "results": [ 00:11:06.974 { 00:11:06.974 "job": "raid_bdev1", 00:11:06.974 "core_mask": "0x1", 00:11:06.974 "workload": "randrw", 00:11:06.974 "percentage": 50, 00:11:06.974 "status": "finished", 00:11:06.974 "queue_depth": 1, 00:11:06.974 "io_size": 131072, 00:11:06.974 "runtime": 1.371138, 00:11:06.974 "iops": 11567.034098682992, 00:11:06.974 "mibps": 1445.879262335374, 00:11:06.974 
"io_failed": 1, 00:11:06.974 "io_timeout": 0, 00:11:06.974 "avg_latency_us": 121.55564787871931, 00:11:06.974 "min_latency_us": 31.748471615720526, 00:11:06.974 "max_latency_us": 1845.8829694323144 00:11:06.974 } 00:11:06.974 ], 00:11:06.974 "core_count": 1 00:11:06.974 } 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62698 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62698 ']' 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62698 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62698 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62698' 00:11:06.974 killing process with pid 62698 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62698 00:11:06.974 [2024-11-20 07:07:48.992965] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.974 07:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62698 00:11:06.974 [2024-11-20 07:07:49.175459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UUFMkmsBfC 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:08.878 00:11:08.878 real 0m4.804s 00:11:08.878 user 0m5.637s 00:11:08.878 sys 0m0.653s 00:11:08.878 ************************************ 00:11:08.878 END TEST raid_read_error_test 00:11:08.878 ************************************ 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.878 07:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.878 07:07:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:11:08.878 07:07:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:08.878 07:07:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.878 07:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.878 ************************************ 00:11:08.878 START TEST raid_write_error_test 00:11:08.878 ************************************ 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:08.878 07:07:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:08.878 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:08.878 07:07:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mQegJNEX7Q 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62839 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62839 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62839 ']' 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.879 07:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.879 [2024-11-20 07:07:50.865966] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:08.879 [2024-11-20 07:07:50.866243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62839 ] 00:11:08.879 [2024-11-20 07:07:51.052963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.138 [2024-11-20 07:07:51.210791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.398 [2024-11-20 07:07:51.494227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.398 [2024-11-20 07:07:51.494466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.657 BaseBdev1_malloc 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:09.657 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 true 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 [2024-11-20 07:07:51.800081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.658 [2024-11-20 07:07:51.800302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.658 [2024-11-20 07:07:51.800385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:09.658 [2024-11-20 07:07:51.800434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.658 [2024-11-20 07:07:51.803499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.658 [2024-11-20 07:07:51.803615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.658 BaseBdev1 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 BaseBdev2_malloc 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:09.658 07:07:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 true 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 [2024-11-20 07:07:51.869425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:09.658 [2024-11-20 07:07:51.869518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.658 [2024-11-20 07:07:51.869546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:09.658 [2024-11-20 07:07:51.869561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.658 [2024-11-20 07:07:51.872618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.658 [2024-11-20 07:07:51.872672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.658 BaseBdev2 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 [2024-11-20 07:07:51.877512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:09.658 [2024-11-20 07:07:51.880098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.658 [2024-11-20 07:07:51.880551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.658 [2024-11-20 07:07:51.880580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:09.658 [2024-11-20 07:07:51.880970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:09.658 [2024-11-20 07:07:51.881206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.658 [2024-11-20 07:07:51.881222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:09.658 [2024-11-20 07:07:51.881569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.658 07:07:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.918 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.918 "name": "raid_bdev1", 00:11:09.918 "uuid": "a195b0f6-0037-4d25-8d9f-69aa07020d91", 00:11:09.918 "strip_size_kb": 64, 00:11:09.918 "state": "online", 00:11:09.918 "raid_level": "concat", 00:11:09.918 "superblock": true, 00:11:09.918 "num_base_bdevs": 2, 00:11:09.918 "num_base_bdevs_discovered": 2, 00:11:09.918 "num_base_bdevs_operational": 2, 00:11:09.918 "base_bdevs_list": [ 00:11:09.918 { 00:11:09.918 "name": "BaseBdev1", 00:11:09.918 "uuid": "b7ac23fb-8ff9-5c59-9533-b52ef18c5ae4", 00:11:09.918 "is_configured": true, 00:11:09.918 "data_offset": 2048, 00:11:09.918 "data_size": 63488 00:11:09.918 }, 00:11:09.918 { 00:11:09.918 "name": "BaseBdev2", 00:11:09.918 "uuid": "1bd1ee54-c37b-5045-8788-e44c827447f8", 00:11:09.918 "is_configured": true, 00:11:09.918 "data_offset": 2048, 00:11:09.918 "data_size": 63488 00:11:09.918 } 00:11:09.918 ] 00:11:09.918 }' 00:11:09.918 07:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.918 07:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.179 07:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.179 07:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.438 [2024-11-20 07:07:52.458314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.379 "name": "raid_bdev1", 00:11:11.379 "uuid": "a195b0f6-0037-4d25-8d9f-69aa07020d91", 00:11:11.379 "strip_size_kb": 64, 00:11:11.379 "state": "online", 00:11:11.379 "raid_level": "concat", 00:11:11.379 "superblock": true, 00:11:11.379 "num_base_bdevs": 2, 00:11:11.379 "num_base_bdevs_discovered": 2, 00:11:11.379 "num_base_bdevs_operational": 2, 00:11:11.379 "base_bdevs_list": [ 00:11:11.379 { 00:11:11.379 "name": "BaseBdev1", 00:11:11.379 "uuid": "b7ac23fb-8ff9-5c59-9533-b52ef18c5ae4", 00:11:11.379 "is_configured": true, 00:11:11.379 "data_offset": 2048, 00:11:11.379 "data_size": 63488 00:11:11.379 }, 00:11:11.379 { 00:11:11.379 "name": "BaseBdev2", 00:11:11.379 "uuid": "1bd1ee54-c37b-5045-8788-e44c827447f8", 00:11:11.379 "is_configured": true, 00:11:11.379 "data_offset": 2048, 00:11:11.379 "data_size": 63488 00:11:11.379 } 00:11:11.379 ] 00:11:11.379 }' 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.379 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.639 [2024-11-20 07:07:53.796996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.639 [2024-11-20 07:07:53.797177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.639 [2024-11-20 07:07:53.800692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.639 [2024-11-20 07:07:53.800833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.639 [2024-11-20 07:07:53.800899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.639 [2024-11-20 07:07:53.800994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:11.639 { 00:11:11.639 "results": [ 00:11:11.639 { 00:11:11.639 "job": "raid_bdev1", 00:11:11.639 "core_mask": "0x1", 00:11:11.639 "workload": "randrw", 00:11:11.639 "percentage": 50, 00:11:11.639 "status": "finished", 00:11:11.639 "queue_depth": 1, 00:11:11.639 "io_size": 131072, 00:11:11.639 "runtime": 1.338984, 00:11:11.639 "iops": 11662.57401133994, 00:11:11.639 "mibps": 1457.8217514174926, 00:11:11.639 "io_failed": 1, 00:11:11.639 "io_timeout": 0, 00:11:11.639 "avg_latency_us": 120.48111002090712, 00:11:11.639 "min_latency_us": 31.972052401746726, 00:11:11.639 "max_latency_us": 1702.7912663755458 00:11:11.639 } 00:11:11.639 ], 00:11:11.639 "core_count": 1 00:11:11.639 } 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62839 00:11:11.639 07:07:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62839 ']' 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62839 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62839 00:11:11.639 killing process with pid 62839 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62839' 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62839 00:11:11.639 [2024-11-20 07:07:53.841532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.639 07:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62839 00:11:11.898 [2024-11-20 07:07:54.016466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mQegJNEX7Q 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.277 07:07:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:13.277 00:11:13.277 real 0m4.718s 00:11:13.277 user 0m5.552s 00:11:13.277 sys 0m0.640s 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.277 ************************************ 00:11:13.277 END TEST raid_write_error_test 00:11:13.277 ************************************ 00:11:13.277 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.277 07:07:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:13.277 07:07:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:13.277 07:07:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.277 07:07:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.277 07:07:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.277 ************************************ 00:11:13.277 START TEST raid_state_function_test 00:11:13.277 ************************************ 00:11:13.277 07:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:11:13.277 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:13.277 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:13.277 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:13.277 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.277 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62987 00:11:13.278 Process raid pid: 62987 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62987' 00:11:13.278 07:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62987 00:11:13.537 07:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62987 ']' 00:11:13.538 07:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.538 07:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.538 07:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.538 07:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.538 07:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.538 [2024-11-20 07:07:55.626693] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:13.538 [2024-11-20 07:07:55.626914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.797 [2024-11-20 07:07:55.805031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.797 [2024-11-20 07:07:55.964514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.059 [2024-11-20 07:07:56.221112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.059 [2024-11-20 07:07:56.221177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.337 [2024-11-20 07:07:56.499815] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.337 [2024-11-20 07:07:56.499899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.337 [2024-11-20 07:07:56.499912] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.337 [2024-11-20 07:07:56.499923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.337 07:07:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.337 "name": "Existed_Raid", 00:11:14.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.337 "strip_size_kb": 0, 00:11:14.337 "state": "configuring", 00:11:14.337 
"raid_level": "raid1", 00:11:14.337 "superblock": false, 00:11:14.337 "num_base_bdevs": 2, 00:11:14.337 "num_base_bdevs_discovered": 0, 00:11:14.337 "num_base_bdevs_operational": 2, 00:11:14.337 "base_bdevs_list": [ 00:11:14.337 { 00:11:14.337 "name": "BaseBdev1", 00:11:14.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.337 "is_configured": false, 00:11:14.337 "data_offset": 0, 00:11:14.337 "data_size": 0 00:11:14.337 }, 00:11:14.337 { 00:11:14.337 "name": "BaseBdev2", 00:11:14.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.337 "is_configured": false, 00:11:14.337 "data_offset": 0, 00:11:14.337 "data_size": 0 00:11:14.337 } 00:11:14.337 ] 00:11:14.337 }' 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.337 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.904 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.904 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.904 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.904 [2024-11-20 07:07:56.986950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.904 [2024-11-20 07:07:56.987109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.904 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.904 07:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:14.904 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.904 07:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:14.904 [2024-11-20 07:07:56.998917] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.904 [2024-11-20 07:07:56.999060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.904 [2024-11-20 07:07:56.999102] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.904 [2024-11-20 07:07:56.999144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.904 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.904 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.904 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.904 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.904 [2024-11-20 07:07:57.057326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.904 BaseBdev1 00:11:14.904 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.905 [ 00:11:14.905 { 00:11:14.905 "name": "BaseBdev1", 00:11:14.905 "aliases": [ 00:11:14.905 "4fa57b9b-de93-4090-92f5-03a080a176b3" 00:11:14.905 ], 00:11:14.905 "product_name": "Malloc disk", 00:11:14.905 "block_size": 512, 00:11:14.905 "num_blocks": 65536, 00:11:14.905 "uuid": "4fa57b9b-de93-4090-92f5-03a080a176b3", 00:11:14.905 "assigned_rate_limits": { 00:11:14.905 "rw_ios_per_sec": 0, 00:11:14.905 "rw_mbytes_per_sec": 0, 00:11:14.905 "r_mbytes_per_sec": 0, 00:11:14.905 "w_mbytes_per_sec": 0 00:11:14.905 }, 00:11:14.905 "claimed": true, 00:11:14.905 "claim_type": "exclusive_write", 00:11:14.905 "zoned": false, 00:11:14.905 "supported_io_types": { 00:11:14.905 "read": true, 00:11:14.905 "write": true, 00:11:14.905 "unmap": true, 00:11:14.905 "flush": true, 00:11:14.905 "reset": true, 00:11:14.905 "nvme_admin": false, 00:11:14.905 "nvme_io": false, 00:11:14.905 "nvme_io_md": false, 00:11:14.905 "write_zeroes": true, 00:11:14.905 "zcopy": true, 00:11:14.905 "get_zone_info": false, 00:11:14.905 "zone_management": false, 00:11:14.905 "zone_append": false, 00:11:14.905 "compare": false, 00:11:14.905 "compare_and_write": false, 00:11:14.905 "abort": true, 00:11:14.905 "seek_hole": false, 00:11:14.905 "seek_data": false, 00:11:14.905 "copy": true, 00:11:14.905 "nvme_iov_md": 
false 00:11:14.905 }, 00:11:14.905 "memory_domains": [ 00:11:14.905 { 00:11:14.905 "dma_device_id": "system", 00:11:14.905 "dma_device_type": 1 00:11:14.905 }, 00:11:14.905 { 00:11:14.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.905 "dma_device_type": 2 00:11:14.905 } 00:11:14.905 ], 00:11:14.905 "driver_specific": {} 00:11:14.905 } 00:11:14.905 ] 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.905 
07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.905 "name": "Existed_Raid", 00:11:14.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.905 "strip_size_kb": 0, 00:11:14.905 "state": "configuring", 00:11:14.905 "raid_level": "raid1", 00:11:14.905 "superblock": false, 00:11:14.905 "num_base_bdevs": 2, 00:11:14.905 "num_base_bdevs_discovered": 1, 00:11:14.905 "num_base_bdevs_operational": 2, 00:11:14.905 "base_bdevs_list": [ 00:11:14.905 { 00:11:14.905 "name": "BaseBdev1", 00:11:14.905 "uuid": "4fa57b9b-de93-4090-92f5-03a080a176b3", 00:11:14.905 "is_configured": true, 00:11:14.905 "data_offset": 0, 00:11:14.905 "data_size": 65536 00:11:14.905 }, 00:11:14.905 { 00:11:14.905 "name": "BaseBdev2", 00:11:14.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.905 "is_configured": false, 00:11:14.905 "data_offset": 0, 00:11:14.905 "data_size": 0 00:11:14.905 } 00:11:14.905 ] 00:11:14.905 }' 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.905 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.473 [2024-11-20 07:07:57.584552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.473 [2024-11-20 07:07:57.584748] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.473 [2024-11-20 07:07:57.596579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.473 [2024-11-20 07:07:57.598853] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.473 [2024-11-20 07:07:57.598909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.473 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.474 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.474 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.474 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.474 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.474 "name": "Existed_Raid", 00:11:15.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.474 "strip_size_kb": 0, 00:11:15.474 "state": "configuring", 00:11:15.474 "raid_level": "raid1", 00:11:15.474 "superblock": false, 00:11:15.474 "num_base_bdevs": 2, 00:11:15.474 "num_base_bdevs_discovered": 1, 00:11:15.474 "num_base_bdevs_operational": 2, 00:11:15.474 "base_bdevs_list": [ 00:11:15.474 { 00:11:15.474 "name": "BaseBdev1", 00:11:15.474 "uuid": "4fa57b9b-de93-4090-92f5-03a080a176b3", 00:11:15.474 "is_configured": true, 00:11:15.474 "data_offset": 0, 00:11:15.474 "data_size": 65536 00:11:15.474 }, 00:11:15.474 { 00:11:15.474 "name": "BaseBdev2", 00:11:15.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.474 "is_configured": false, 00:11:15.474 "data_offset": 0, 00:11:15.474 "data_size": 0 00:11:15.474 } 00:11:15.474 ] 
00:11:15.474 }' 00:11:15.474 07:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.474 07:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.044 [2024-11-20 07:07:58.145231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.044 [2024-11-20 07:07:58.145443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.044 [2024-11-20 07:07:58.145478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:16.044 [2024-11-20 07:07:58.145901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:16.044 [2024-11-20 07:07:58.146172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.044 [2024-11-20 07:07:58.146227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:16.044 [2024-11-20 07:07:58.146615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.044 BaseBdev2 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.044 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.044 [ 00:11:16.044 { 00:11:16.044 "name": "BaseBdev2", 00:11:16.044 "aliases": [ 00:11:16.044 "9aaf6933-41e4-4c8c-9ec3-5079304559f3" 00:11:16.044 ], 00:11:16.044 "product_name": "Malloc disk", 00:11:16.044 "block_size": 512, 00:11:16.044 "num_blocks": 65536, 00:11:16.044 "uuid": "9aaf6933-41e4-4c8c-9ec3-5079304559f3", 00:11:16.044 "assigned_rate_limits": { 00:11:16.044 "rw_ios_per_sec": 0, 00:11:16.044 "rw_mbytes_per_sec": 0, 00:11:16.044 "r_mbytes_per_sec": 0, 00:11:16.044 "w_mbytes_per_sec": 0 00:11:16.044 }, 00:11:16.044 "claimed": true, 00:11:16.044 "claim_type": "exclusive_write", 00:11:16.044 "zoned": false, 00:11:16.044 "supported_io_types": { 00:11:16.044 "read": true, 00:11:16.044 "write": true, 00:11:16.044 "unmap": true, 00:11:16.044 "flush": true, 00:11:16.044 "reset": true, 00:11:16.044 "nvme_admin": false, 00:11:16.044 "nvme_io": false, 00:11:16.044 "nvme_io_md": false, 00:11:16.044 "write_zeroes": 
true, 00:11:16.044 "zcopy": true, 00:11:16.044 "get_zone_info": false, 00:11:16.044 "zone_management": false, 00:11:16.044 "zone_append": false, 00:11:16.044 "compare": false, 00:11:16.044 "compare_and_write": false, 00:11:16.044 "abort": true, 00:11:16.044 "seek_hole": false, 00:11:16.044 "seek_data": false, 00:11:16.044 "copy": true, 00:11:16.044 "nvme_iov_md": false 00:11:16.044 }, 00:11:16.044 "memory_domains": [ 00:11:16.044 { 00:11:16.044 "dma_device_id": "system", 00:11:16.044 "dma_device_type": 1 00:11:16.044 }, 00:11:16.044 { 00:11:16.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.044 "dma_device_type": 2 00:11:16.044 } 00:11:16.045 ], 00:11:16.045 "driver_specific": {} 00:11:16.045 } 00:11:16.045 ] 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.045 07:07:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.045 "name": "Existed_Raid", 00:11:16.045 "uuid": "f2bd12ca-4599-47d4-b270-c9778347fbef", 00:11:16.045 "strip_size_kb": 0, 00:11:16.045 "state": "online", 00:11:16.045 "raid_level": "raid1", 00:11:16.045 "superblock": false, 00:11:16.045 "num_base_bdevs": 2, 00:11:16.045 "num_base_bdevs_discovered": 2, 00:11:16.045 "num_base_bdevs_operational": 2, 00:11:16.045 "base_bdevs_list": [ 00:11:16.045 { 00:11:16.045 "name": "BaseBdev1", 00:11:16.045 "uuid": "4fa57b9b-de93-4090-92f5-03a080a176b3", 00:11:16.045 "is_configured": true, 00:11:16.045 "data_offset": 0, 00:11:16.045 "data_size": 65536 00:11:16.045 }, 00:11:16.045 { 00:11:16.045 "name": "BaseBdev2", 00:11:16.045 "uuid": "9aaf6933-41e4-4c8c-9ec3-5079304559f3", 00:11:16.045 "is_configured": true, 00:11:16.045 "data_offset": 0, 00:11:16.045 "data_size": 65536 00:11:16.045 } 00:11:16.045 ] 00:11:16.045 }' 00:11:16.045 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.045 07:07:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.612 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.612 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.612 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.612 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.612 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.612 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.613 [2024-11-20 07:07:58.616830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.613 "name": "Existed_Raid", 00:11:16.613 "aliases": [ 00:11:16.613 "f2bd12ca-4599-47d4-b270-c9778347fbef" 00:11:16.613 ], 00:11:16.613 "product_name": "Raid Volume", 00:11:16.613 "block_size": 512, 00:11:16.613 "num_blocks": 65536, 00:11:16.613 "uuid": "f2bd12ca-4599-47d4-b270-c9778347fbef", 00:11:16.613 "assigned_rate_limits": { 00:11:16.613 "rw_ios_per_sec": 0, 00:11:16.613 "rw_mbytes_per_sec": 0, 00:11:16.613 "r_mbytes_per_sec": 0, 00:11:16.613 
"w_mbytes_per_sec": 0 00:11:16.613 }, 00:11:16.613 "claimed": false, 00:11:16.613 "zoned": false, 00:11:16.613 "supported_io_types": { 00:11:16.613 "read": true, 00:11:16.613 "write": true, 00:11:16.613 "unmap": false, 00:11:16.613 "flush": false, 00:11:16.613 "reset": true, 00:11:16.613 "nvme_admin": false, 00:11:16.613 "nvme_io": false, 00:11:16.613 "nvme_io_md": false, 00:11:16.613 "write_zeroes": true, 00:11:16.613 "zcopy": false, 00:11:16.613 "get_zone_info": false, 00:11:16.613 "zone_management": false, 00:11:16.613 "zone_append": false, 00:11:16.613 "compare": false, 00:11:16.613 "compare_and_write": false, 00:11:16.613 "abort": false, 00:11:16.613 "seek_hole": false, 00:11:16.613 "seek_data": false, 00:11:16.613 "copy": false, 00:11:16.613 "nvme_iov_md": false 00:11:16.613 }, 00:11:16.613 "memory_domains": [ 00:11:16.613 { 00:11:16.613 "dma_device_id": "system", 00:11:16.613 "dma_device_type": 1 00:11:16.613 }, 00:11:16.613 { 00:11:16.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.613 "dma_device_type": 2 00:11:16.613 }, 00:11:16.613 { 00:11:16.613 "dma_device_id": "system", 00:11:16.613 "dma_device_type": 1 00:11:16.613 }, 00:11:16.613 { 00:11:16.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.613 "dma_device_type": 2 00:11:16.613 } 00:11:16.613 ], 00:11:16.613 "driver_specific": { 00:11:16.613 "raid": { 00:11:16.613 "uuid": "f2bd12ca-4599-47d4-b270-c9778347fbef", 00:11:16.613 "strip_size_kb": 0, 00:11:16.613 "state": "online", 00:11:16.613 "raid_level": "raid1", 00:11:16.613 "superblock": false, 00:11:16.613 "num_base_bdevs": 2, 00:11:16.613 "num_base_bdevs_discovered": 2, 00:11:16.613 "num_base_bdevs_operational": 2, 00:11:16.613 "base_bdevs_list": [ 00:11:16.613 { 00:11:16.613 "name": "BaseBdev1", 00:11:16.613 "uuid": "4fa57b9b-de93-4090-92f5-03a080a176b3", 00:11:16.613 "is_configured": true, 00:11:16.613 "data_offset": 0, 00:11:16.613 "data_size": 65536 00:11:16.613 }, 00:11:16.613 { 00:11:16.613 "name": "BaseBdev2", 00:11:16.613 "uuid": 
"9aaf6933-41e4-4c8c-9ec3-5079304559f3", 00:11:16.613 "is_configured": true, 00:11:16.613 "data_offset": 0, 00:11:16.613 "data_size": 65536 00:11:16.613 } 00:11:16.613 ] 00:11:16.613 } 00:11:16.613 } 00:11:16.613 }' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:16.613 BaseBdev2' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.613 07:07:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.613 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.613 [2024-11-20 07:07:58.840596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.874 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.875 07:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.875 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.875 "name": "Existed_Raid", 00:11:16.875 "uuid": "f2bd12ca-4599-47d4-b270-c9778347fbef", 00:11:16.875 "strip_size_kb": 0, 00:11:16.875 "state": "online", 00:11:16.875 "raid_level": "raid1", 00:11:16.875 "superblock": false, 00:11:16.875 "num_base_bdevs": 2, 00:11:16.875 "num_base_bdevs_discovered": 1, 00:11:16.875 "num_base_bdevs_operational": 1, 00:11:16.875 "base_bdevs_list": [ 00:11:16.875 { 
00:11:16.875 "name": null, 00:11:16.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.875 "is_configured": false, 00:11:16.875 "data_offset": 0, 00:11:16.875 "data_size": 65536 00:11:16.875 }, 00:11:16.875 { 00:11:16.875 "name": "BaseBdev2", 00:11:16.875 "uuid": "9aaf6933-41e4-4c8c-9ec3-5079304559f3", 00:11:16.875 "is_configured": true, 00:11:16.875 "data_offset": 0, 00:11:16.875 "data_size": 65536 00:11:16.875 } 00:11:16.875 ] 00:11:16.875 }' 00:11:16.875 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.875 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
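The state check driven by this transcript boils down to `jq` filters over the `bdev_raid_get_bdevs` JSON (e.g. `select(.is_configured == true).name` for configured base bdevs, plus field comparisons inside `verify_raid_bdev_state`). A minimal Python sketch of that same logic, run against a trimmed copy of the raid bdev info dumped above after BaseBdev1 was removed; the helper names and the inlined JSON snippet are illustrative, not part of the SPDK test suite:

```python
import json

# Trimmed copy of the "Existed_Raid" info dumped by
# "rpc_cmd bdev_raid_get_bdevs all" above, after BaseBdev1 removal.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
""")

def configured_base_bdevs(info):
    # Mirrors: jq -r '.[] | select(.is_configured == true).name'
    return [b["name"] for b in info["base_bdevs_list"] if b["is_configured"]]

def raid_state_matches(info, state, raid_level, operational):
    # Mirrors the field checks behind:
    #   verify_raid_bdev_state Existed_Raid online raid1 0 1
    return (info["state"] == state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == operational)

print(configured_base_bdevs(raid_bdev_info))                        # ['BaseBdev2']
print(raid_state_matches(raid_bdev_info, "online", "raid1", 1))     # True
```

Because raid1 has redundancy (`has_redundancy raid1` returns 0 in the transcript), the expected state after removing one of two base bdevs is still `online` with a single operational base bdev, which is exactly what the dump shows.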
00:11:17.448 [2024-11-20 07:07:59.453702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.448 [2024-11-20 07:07:59.453933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.448 [2024-11-20 07:07:59.563626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.448 [2024-11-20 07:07:59.563711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.448 [2024-11-20 07:07:59.563725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62987 00:11:17.448 07:07:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62987 ']' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62987 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62987 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62987' 00:11:17.448 killing process with pid 62987 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62987 00:11:17.448 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62987 00:11:17.448 [2024-11-20 07:07:59.656095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.448 [2024-11-20 07:07:59.674501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.825 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:18.825 ************************************ 00:11:18.825 END TEST raid_state_function_test 00:11:18.825 ************************************ 00:11:18.825 00:11:18.825 real 0m5.445s 00:11:18.825 user 0m7.695s 00:11:18.825 sys 0m0.959s 00:11:18.825 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.825 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.825 07:08:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:18.825 07:08:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:18.825 07:08:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.825 07:08:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.825 ************************************ 00:11:18.825 START TEST raid_state_function_test_sb 00:11:18.825 ************************************ 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:18.825 Process raid pid: 63241 00:11:18.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63241 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63241' 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63241 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63241 ']' 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.825 07:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.085 [2024-11-20 07:08:01.135837] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:19.085 [2024-11-20 07:08:01.135971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.085 [2024-11-20 07:08:01.314219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.344 [2024-11-20 07:08:01.461764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.603 [2024-11-20 07:08:01.736642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.604 [2024-11-20 07:08:01.736827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.864 [2024-11-20 07:08:02.013583] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.864 [2024-11-20 07:08:02.013668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.864 [2024-11-20 07:08:02.013682] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.864 [2024-11-20 07:08:02.013694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.864 "name": "Existed_Raid", 00:11:19.864 "uuid": "5a6dc020-4371-41a4-8404-caadbd6bf073", 00:11:19.864 "strip_size_kb": 0, 00:11:19.864 "state": "configuring", 00:11:19.864 "raid_level": "raid1", 00:11:19.864 "superblock": true, 00:11:19.864 "num_base_bdevs": 2, 00:11:19.864 "num_base_bdevs_discovered": 0, 00:11:19.864 "num_base_bdevs_operational": 2, 00:11:19.864 "base_bdevs_list": [ 00:11:19.864 { 00:11:19.864 "name": "BaseBdev1", 00:11:19.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.864 "is_configured": false, 00:11:19.864 "data_offset": 0, 00:11:19.864 "data_size": 0 00:11:19.864 }, 00:11:19.864 { 00:11:19.864 "name": "BaseBdev2", 00:11:19.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.864 "is_configured": false, 00:11:19.864 "data_offset": 0, 00:11:19.864 "data_size": 0 00:11:19.864 } 00:11:19.864 ] 00:11:19.864 }' 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.864 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.433 [2024-11-20 07:08:02.460740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:20.433 [2024-11-20 07:08:02.460896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.433 [2024-11-20 07:08:02.468700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.433 [2024-11-20 07:08:02.468761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.433 [2024-11-20 07:08:02.468771] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.433 [2024-11-20 07:08:02.468786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.433 [2024-11-20 07:08:02.524213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.433 BaseBdev1 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.433 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.433 [ 00:11:20.433 { 00:11:20.433 "name": "BaseBdev1", 00:11:20.433 "aliases": [ 00:11:20.433 "748ad39b-bacf-41f8-bd8d-7f384981da23" 00:11:20.433 ], 00:11:20.433 "product_name": "Malloc disk", 00:11:20.433 "block_size": 512, 00:11:20.433 "num_blocks": 65536, 00:11:20.433 "uuid": "748ad39b-bacf-41f8-bd8d-7f384981da23", 00:11:20.433 "assigned_rate_limits": { 00:11:20.433 "rw_ios_per_sec": 0, 00:11:20.433 "rw_mbytes_per_sec": 0, 00:11:20.433 "r_mbytes_per_sec": 0, 00:11:20.433 "w_mbytes_per_sec": 0 00:11:20.433 }, 00:11:20.433 "claimed": true, 
00:11:20.433 "claim_type": "exclusive_write", 00:11:20.433 "zoned": false, 00:11:20.433 "supported_io_types": { 00:11:20.433 "read": true, 00:11:20.433 "write": true, 00:11:20.433 "unmap": true, 00:11:20.433 "flush": true, 00:11:20.433 "reset": true, 00:11:20.433 "nvme_admin": false, 00:11:20.433 "nvme_io": false, 00:11:20.433 "nvme_io_md": false, 00:11:20.433 "write_zeroes": true, 00:11:20.433 "zcopy": true, 00:11:20.433 "get_zone_info": false, 00:11:20.433 "zone_management": false, 00:11:20.433 "zone_append": false, 00:11:20.433 "compare": false, 00:11:20.433 "compare_and_write": false, 00:11:20.433 "abort": true, 00:11:20.434 "seek_hole": false, 00:11:20.434 "seek_data": false, 00:11:20.434 "copy": true, 00:11:20.434 "nvme_iov_md": false 00:11:20.434 }, 00:11:20.434 "memory_domains": [ 00:11:20.434 { 00:11:20.434 "dma_device_id": "system", 00:11:20.434 "dma_device_type": 1 00:11:20.434 }, 00:11:20.434 { 00:11:20.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.434 "dma_device_type": 2 00:11:20.434 } 00:11:20.434 ], 00:11:20.434 "driver_specific": {} 00:11:20.434 } 00:11:20.434 ] 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.434 "name": "Existed_Raid", 00:11:20.434 "uuid": "919fa16f-8479-412b-b97b-96303c758ab2", 00:11:20.434 "strip_size_kb": 0, 00:11:20.434 "state": "configuring", 00:11:20.434 "raid_level": "raid1", 00:11:20.434 "superblock": true, 00:11:20.434 "num_base_bdevs": 2, 00:11:20.434 "num_base_bdevs_discovered": 1, 00:11:20.434 "num_base_bdevs_operational": 2, 00:11:20.434 "base_bdevs_list": [ 00:11:20.434 { 00:11:20.434 "name": "BaseBdev1", 00:11:20.434 "uuid": "748ad39b-bacf-41f8-bd8d-7f384981da23", 00:11:20.434 "is_configured": true, 00:11:20.434 "data_offset": 2048, 00:11:20.434 "data_size": 63488 00:11:20.434 }, 00:11:20.434 { 00:11:20.434 "name": "BaseBdev2", 00:11:20.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.434 "is_configured": false, 00:11:20.434 
"data_offset": 0, 00:11:20.434 "data_size": 0 00:11:20.434 } 00:11:20.434 ] 00:11:20.434 }' 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.434 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.001 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.001 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.001 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.001 [2024-11-20 07:08:02.995544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.002 [2024-11-20 07:08:02.995719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:21.002 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.002 07:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:21.002 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.002 07:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.002 [2024-11-20 07:08:03.003585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.002 [2024-11-20 07:08:03.006010] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.002 [2024-11-20 07:08:03.006104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.002 "name": "Existed_Raid", 00:11:21.002 "uuid": "2acffd5b-254b-4691-87ae-e677146d1a78", 00:11:21.002 "strip_size_kb": 0, 00:11:21.002 "state": "configuring", 00:11:21.002 "raid_level": "raid1", 00:11:21.002 "superblock": true, 00:11:21.002 "num_base_bdevs": 2, 00:11:21.002 "num_base_bdevs_discovered": 1, 00:11:21.002 "num_base_bdevs_operational": 2, 00:11:21.002 "base_bdevs_list": [ 00:11:21.002 { 00:11:21.002 "name": "BaseBdev1", 00:11:21.002 "uuid": "748ad39b-bacf-41f8-bd8d-7f384981da23", 00:11:21.002 "is_configured": true, 00:11:21.002 "data_offset": 2048, 00:11:21.002 "data_size": 63488 00:11:21.002 }, 00:11:21.002 { 00:11:21.002 "name": "BaseBdev2", 00:11:21.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.002 "is_configured": false, 00:11:21.002 "data_offset": 0, 00:11:21.002 "data_size": 0 00:11:21.002 } 00:11:21.002 ] 00:11:21.002 }' 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.002 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.261 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:21.261 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.261 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.520 [2024-11-20 07:08:03.541594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.520 [2024-11-20 07:08:03.542029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.520 [2024-11-20 07:08:03.542088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.520 BaseBdev2 00:11:21.520 [2024-11-20 07:08:03.542469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:11:21.520 [2024-11-20 07:08:03.542728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.520 [2024-11-20 07:08:03.542779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:21.520 [2024-11-20 07:08:03.542963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.520 07:08:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.520 [ 00:11:21.520 { 00:11:21.520 "name": "BaseBdev2", 00:11:21.520 "aliases": [ 00:11:21.520 "447ec5e4-e6a7-4ae0-829a-e025e91307ce" 00:11:21.520 ], 00:11:21.520 "product_name": "Malloc disk", 00:11:21.520 "block_size": 512, 00:11:21.520 "num_blocks": 65536, 00:11:21.520 "uuid": "447ec5e4-e6a7-4ae0-829a-e025e91307ce", 00:11:21.520 "assigned_rate_limits": { 00:11:21.520 "rw_ios_per_sec": 0, 00:11:21.520 "rw_mbytes_per_sec": 0, 00:11:21.520 "r_mbytes_per_sec": 0, 00:11:21.520 "w_mbytes_per_sec": 0 00:11:21.520 }, 00:11:21.520 "claimed": true, 00:11:21.520 "claim_type": "exclusive_write", 00:11:21.520 "zoned": false, 00:11:21.520 "supported_io_types": { 00:11:21.520 "read": true, 00:11:21.520 "write": true, 00:11:21.520 "unmap": true, 00:11:21.520 "flush": true, 00:11:21.520 "reset": true, 00:11:21.520 "nvme_admin": false, 00:11:21.520 "nvme_io": false, 00:11:21.520 "nvme_io_md": false, 00:11:21.520 "write_zeroes": true, 00:11:21.520 "zcopy": true, 00:11:21.520 "get_zone_info": false, 00:11:21.520 "zone_management": false, 00:11:21.520 "zone_append": false, 00:11:21.521 "compare": false, 00:11:21.521 "compare_and_write": false, 00:11:21.521 "abort": true, 00:11:21.521 "seek_hole": false, 00:11:21.521 "seek_data": false, 00:11:21.521 "copy": true, 00:11:21.521 "nvme_iov_md": false 00:11:21.521 }, 00:11:21.521 "memory_domains": [ 00:11:21.521 { 00:11:21.521 "dma_device_id": "system", 00:11:21.521 "dma_device_type": 1 00:11:21.521 }, 00:11:21.521 { 00:11:21.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.521 "dma_device_type": 2 00:11:21.521 } 00:11:21.521 ], 00:11:21.521 "driver_specific": {} 00:11:21.521 } 00:11:21.521 ] 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:21.521 "name": "Existed_Raid", 00:11:21.521 "uuid": "2acffd5b-254b-4691-87ae-e677146d1a78", 00:11:21.521 "strip_size_kb": 0, 00:11:21.521 "state": "online", 00:11:21.521 "raid_level": "raid1", 00:11:21.521 "superblock": true, 00:11:21.521 "num_base_bdevs": 2, 00:11:21.521 "num_base_bdevs_discovered": 2, 00:11:21.521 "num_base_bdevs_operational": 2, 00:11:21.521 "base_bdevs_list": [ 00:11:21.521 { 00:11:21.521 "name": "BaseBdev1", 00:11:21.521 "uuid": "748ad39b-bacf-41f8-bd8d-7f384981da23", 00:11:21.521 "is_configured": true, 00:11:21.521 "data_offset": 2048, 00:11:21.521 "data_size": 63488 00:11:21.521 }, 00:11:21.521 { 00:11:21.521 "name": "BaseBdev2", 00:11:21.521 "uuid": "447ec5e4-e6a7-4ae0-829a-e025e91307ce", 00:11:21.521 "is_configured": true, 00:11:21.521 "data_offset": 2048, 00:11:21.521 "data_size": 63488 00:11:21.521 } 00:11:21.521 ] 00:11:21.521 }' 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.521 07:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.781 07:08:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.781 [2024-11-20 07:08:04.021216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.781 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.041 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.041 "name": "Existed_Raid", 00:11:22.041 "aliases": [ 00:11:22.041 "2acffd5b-254b-4691-87ae-e677146d1a78" 00:11:22.041 ], 00:11:22.041 "product_name": "Raid Volume", 00:11:22.041 "block_size": 512, 00:11:22.041 "num_blocks": 63488, 00:11:22.041 "uuid": "2acffd5b-254b-4691-87ae-e677146d1a78", 00:11:22.041 "assigned_rate_limits": { 00:11:22.041 "rw_ios_per_sec": 0, 00:11:22.041 "rw_mbytes_per_sec": 0, 00:11:22.041 "r_mbytes_per_sec": 0, 00:11:22.041 "w_mbytes_per_sec": 0 00:11:22.041 }, 00:11:22.041 "claimed": false, 00:11:22.041 "zoned": false, 00:11:22.041 "supported_io_types": { 00:11:22.041 "read": true, 00:11:22.041 "write": true, 00:11:22.041 "unmap": false, 00:11:22.041 "flush": false, 00:11:22.041 "reset": true, 00:11:22.041 "nvme_admin": false, 00:11:22.041 "nvme_io": false, 00:11:22.041 "nvme_io_md": false, 00:11:22.041 "write_zeroes": true, 00:11:22.041 "zcopy": false, 00:11:22.041 "get_zone_info": false, 00:11:22.041 "zone_management": false, 00:11:22.041 "zone_append": false, 00:11:22.041 "compare": false, 00:11:22.041 "compare_and_write": false, 00:11:22.041 "abort": false, 00:11:22.041 "seek_hole": false, 00:11:22.041 "seek_data": false, 00:11:22.041 "copy": false, 00:11:22.041 "nvme_iov_md": false 00:11:22.041 }, 00:11:22.041 "memory_domains": [ 00:11:22.041 { 00:11:22.041 "dma_device_id": "system", 00:11:22.041 
"dma_device_type": 1 00:11:22.041 }, 00:11:22.041 { 00:11:22.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.041 "dma_device_type": 2 00:11:22.041 }, 00:11:22.041 { 00:11:22.041 "dma_device_id": "system", 00:11:22.041 "dma_device_type": 1 00:11:22.041 }, 00:11:22.041 { 00:11:22.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.041 "dma_device_type": 2 00:11:22.041 } 00:11:22.041 ], 00:11:22.041 "driver_specific": { 00:11:22.041 "raid": { 00:11:22.041 "uuid": "2acffd5b-254b-4691-87ae-e677146d1a78", 00:11:22.041 "strip_size_kb": 0, 00:11:22.041 "state": "online", 00:11:22.041 "raid_level": "raid1", 00:11:22.041 "superblock": true, 00:11:22.041 "num_base_bdevs": 2, 00:11:22.041 "num_base_bdevs_discovered": 2, 00:11:22.041 "num_base_bdevs_operational": 2, 00:11:22.041 "base_bdevs_list": [ 00:11:22.041 { 00:11:22.041 "name": "BaseBdev1", 00:11:22.041 "uuid": "748ad39b-bacf-41f8-bd8d-7f384981da23", 00:11:22.041 "is_configured": true, 00:11:22.041 "data_offset": 2048, 00:11:22.041 "data_size": 63488 00:11:22.041 }, 00:11:22.041 { 00:11:22.041 "name": "BaseBdev2", 00:11:22.041 "uuid": "447ec5e4-e6a7-4ae0-829a-e025e91307ce", 00:11:22.041 "is_configured": true, 00:11:22.041 "data_offset": 2048, 00:11:22.041 "data_size": 63488 00:11:22.041 } 00:11:22.041 ] 00:11:22.041 } 00:11:22.041 } 00:11:22.041 }' 00:11:22.041 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.041 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:22.041 BaseBdev2' 00:11:22.041 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.041 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.041 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.042 07:08:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.042 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.042 [2024-11-20 07:08:04.272586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.301 "name": "Existed_Raid", 00:11:22.301 "uuid": "2acffd5b-254b-4691-87ae-e677146d1a78", 00:11:22.301 "strip_size_kb": 0, 00:11:22.301 "state": "online", 00:11:22.301 "raid_level": "raid1", 00:11:22.301 "superblock": true, 00:11:22.301 "num_base_bdevs": 2, 00:11:22.301 "num_base_bdevs_discovered": 1, 00:11:22.301 "num_base_bdevs_operational": 1, 00:11:22.301 "base_bdevs_list": [ 00:11:22.301 { 00:11:22.301 "name": null, 00:11:22.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.301 "is_configured": false, 00:11:22.301 "data_offset": 0, 00:11:22.301 "data_size": 63488 00:11:22.301 }, 00:11:22.301 { 00:11:22.301 "name": "BaseBdev2", 00:11:22.301 "uuid": "447ec5e4-e6a7-4ae0-829a-e025e91307ce", 00:11:22.301 "is_configured": true, 00:11:22.301 "data_offset": 2048, 00:11:22.301 "data_size": 63488 00:11:22.301 } 00:11:22.301 ] 00:11:22.301 }' 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.301 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.560 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:11:22.560 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.560 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.560 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.560 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.560 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.820 [2024-11-20 07:08:04.868460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:22.820 [2024-11-20 07:08:04.868721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.820 [2024-11-20 07:08:04.982723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.820 [2024-11-20 07:08:04.982924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.820 [2024-11-20 07:08:04.982976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:22.820 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63241 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63241 ']' 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63241 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63241 00:11:22.820 killing process with pid 63241 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63241' 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63241 00:11:22.820 [2024-11-20 07:08:05.052437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.820 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63241 00:11:22.820 [2024-11-20 07:08:05.072836] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.197 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:24.197 00:11:24.197 real 0m5.327s 00:11:24.197 user 0m7.543s 00:11:24.197 sys 0m0.897s 00:11:24.197 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.197 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.197 ************************************ 00:11:24.197 END TEST raid_state_function_test_sb 00:11:24.197 ************************************ 00:11:24.197 07:08:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:24.197 07:08:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.197 07:08:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.197 07:08:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.197 ************************************ 00:11:24.197 START TEST raid_superblock_test 00:11:24.197 ************************************ 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63501 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63501 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63501 ']' 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.197 07:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.198 07:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.198 07:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.198 07:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.456 [2024-11-20 07:08:06.550137] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:24.456 [2024-11-20 07:08:06.550441] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63501 ] 00:11:24.716 [2024-11-20 07:08:06.730200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.716 [2024-11-20 07:08:06.871972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.975 [2024-11-20 07:08:07.115437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.975 [2024-11-20 07:08:07.115611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.234 07:08:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.234 malloc1 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.234 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.234 [2024-11-20 07:08:07.496529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.234 [2024-11-20 07:08:07.496735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.234 [2024-11-20 07:08:07.496792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:25.234 [2024-11-20 07:08:07.496825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.493 
[2024-11-20 07:08:07.499483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.493 [2024-11-20 07:08:07.499578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.493 pt1 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.493 malloc2 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.493 07:08:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.493 [2024-11-20 07:08:07.561157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.493 [2024-11-20 07:08:07.561372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.493 [2024-11-20 07:08:07.561407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:25.493 [2024-11-20 07:08:07.561419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.493 [2024-11-20 07:08:07.564080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.493 [2024-11-20 07:08:07.564124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.493 pt2 00:11:25.493 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.494 [2024-11-20 07:08:07.573214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.494 [2024-11-20 07:08:07.575474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.494 [2024-11-20 07:08:07.575738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:25.494 [2024-11-20 07:08:07.575761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:25.494 [2024-11-20 
07:08:07.576076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:25.494 [2024-11-20 07:08:07.576260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:25.494 [2024-11-20 07:08:07.576276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:25.494 [2024-11-20 07:08:07.576493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.494 07:08:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.494 "name": "raid_bdev1", 00:11:25.494 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:25.494 "strip_size_kb": 0, 00:11:25.494 "state": "online", 00:11:25.494 "raid_level": "raid1", 00:11:25.494 "superblock": true, 00:11:25.494 "num_base_bdevs": 2, 00:11:25.494 "num_base_bdevs_discovered": 2, 00:11:25.494 "num_base_bdevs_operational": 2, 00:11:25.494 "base_bdevs_list": [ 00:11:25.494 { 00:11:25.494 "name": "pt1", 00:11:25.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.494 "is_configured": true, 00:11:25.494 "data_offset": 2048, 00:11:25.494 "data_size": 63488 00:11:25.494 }, 00:11:25.494 { 00:11:25.494 "name": "pt2", 00:11:25.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.494 "is_configured": true, 00:11:25.494 "data_offset": 2048, 00:11:25.494 "data_size": 63488 00:11:25.494 } 00:11:25.494 ] 00:11:25.494 }' 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.494 07:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.752 
07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.752 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.752 [2024-11-20 07:08:08.012793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.011 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.011 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.011 "name": "raid_bdev1", 00:11:26.011 "aliases": [ 00:11:26.011 "f99c4d35-0e37-40fe-b17a-9e8f70fe8267" 00:11:26.011 ], 00:11:26.011 "product_name": "Raid Volume", 00:11:26.011 "block_size": 512, 00:11:26.011 "num_blocks": 63488, 00:11:26.011 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:26.011 "assigned_rate_limits": { 00:11:26.011 "rw_ios_per_sec": 0, 00:11:26.011 "rw_mbytes_per_sec": 0, 00:11:26.011 "r_mbytes_per_sec": 0, 00:11:26.011 "w_mbytes_per_sec": 0 00:11:26.011 }, 00:11:26.011 "claimed": false, 00:11:26.011 "zoned": false, 00:11:26.011 "supported_io_types": { 00:11:26.011 "read": true, 00:11:26.011 "write": true, 00:11:26.011 "unmap": false, 00:11:26.011 "flush": false, 00:11:26.011 "reset": true, 00:11:26.011 "nvme_admin": false, 00:11:26.011 "nvme_io": false, 00:11:26.011 "nvme_io_md": false, 00:11:26.011 "write_zeroes": true, 00:11:26.011 "zcopy": false, 00:11:26.011 "get_zone_info": false, 00:11:26.011 "zone_management": false, 00:11:26.011 "zone_append": false, 00:11:26.011 "compare": false, 00:11:26.011 "compare_and_write": false, 00:11:26.011 "abort": false, 00:11:26.011 "seek_hole": false, 
00:11:26.011 "seek_data": false, 00:11:26.011 "copy": false, 00:11:26.011 "nvme_iov_md": false 00:11:26.011 }, 00:11:26.011 "memory_domains": [ 00:11:26.011 { 00:11:26.011 "dma_device_id": "system", 00:11:26.011 "dma_device_type": 1 00:11:26.011 }, 00:11:26.011 { 00:11:26.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.011 "dma_device_type": 2 00:11:26.011 }, 00:11:26.011 { 00:11:26.011 "dma_device_id": "system", 00:11:26.011 "dma_device_type": 1 00:11:26.011 }, 00:11:26.011 { 00:11:26.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.011 "dma_device_type": 2 00:11:26.011 } 00:11:26.011 ], 00:11:26.011 "driver_specific": { 00:11:26.011 "raid": { 00:11:26.011 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:26.011 "strip_size_kb": 0, 00:11:26.011 "state": "online", 00:11:26.012 "raid_level": "raid1", 00:11:26.012 "superblock": true, 00:11:26.012 "num_base_bdevs": 2, 00:11:26.012 "num_base_bdevs_discovered": 2, 00:11:26.012 "num_base_bdevs_operational": 2, 00:11:26.012 "base_bdevs_list": [ 00:11:26.012 { 00:11:26.012 "name": "pt1", 00:11:26.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.012 "is_configured": true, 00:11:26.012 "data_offset": 2048, 00:11:26.012 "data_size": 63488 00:11:26.012 }, 00:11:26.012 { 00:11:26.012 "name": "pt2", 00:11:26.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.012 "is_configured": true, 00:11:26.012 "data_offset": 2048, 00:11:26.012 "data_size": 63488 00:11:26.012 } 00:11:26.012 ] 00:11:26.012 } 00:11:26.012 } 00:11:26.012 }' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:26.012 pt2' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.012 07:08:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:26.012 [2024-11-20 07:08:08.192490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f99c4d35-0e37-40fe-b17a-9e8f70fe8267 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f99c4d35-0e37-40fe-b17a-9e8f70fe8267 ']' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 [2024-11-20 07:08:08.244003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.012 [2024-11-20 07:08:08.244054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.012 [2024-11-20 07:08:08.244189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.012 [2024-11-20 07:08:08.244265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.012 [2024-11-20 07:08:08.244284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:26.012 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.270 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 [2024-11-20 07:08:08.367839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:26.270 [2024-11-20 07:08:08.370228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:26.271 [2024-11-20 07:08:08.370313] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:11:26.271 [2024-11-20 07:08:08.370400] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:26.271 [2024-11-20 07:08:08.370419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.271 [2024-11-20 07:08:08.370432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:26.271 request: 00:11:26.271 { 00:11:26.271 "name": "raid_bdev1", 00:11:26.271 "raid_level": "raid1", 00:11:26.271 "base_bdevs": [ 00:11:26.271 "malloc1", 00:11:26.271 "malloc2" 00:11:26.271 ], 00:11:26.271 "superblock": false, 00:11:26.271 "method": "bdev_raid_create", 00:11:26.271 "req_id": 1 00:11:26.271 } 00:11:26.271 Got JSON-RPC error response 00:11:26.271 response: 00:11:26.271 { 00:11:26.271 "code": -17, 00:11:26.271 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:26.271 } 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 [2024-11-20 07:08:08.427714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:26.271 [2024-11-20 07:08:08.427823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.271 [2024-11-20 07:08:08.427846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:26.271 [2024-11-20 07:08:08.427861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.271 [2024-11-20 07:08:08.430602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.271 [2024-11-20 07:08:08.430645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:26.271 [2024-11-20 07:08:08.430756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:26.271 [2024-11-20 07:08:08.430832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:26.271 pt1 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.271 07:08:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.271 "name": "raid_bdev1", 00:11:26.271 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:26.271 "strip_size_kb": 0, 00:11:26.271 "state": "configuring", 00:11:26.271 "raid_level": "raid1", 00:11:26.271 "superblock": true, 00:11:26.271 "num_base_bdevs": 2, 00:11:26.271 "num_base_bdevs_discovered": 1, 00:11:26.271 "num_base_bdevs_operational": 2, 00:11:26.271 "base_bdevs_list": [ 00:11:26.271 { 00:11:26.271 "name": "pt1", 00:11:26.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.271 
"is_configured": true, 00:11:26.271 "data_offset": 2048, 00:11:26.271 "data_size": 63488 00:11:26.271 }, 00:11:26.271 { 00:11:26.271 "name": null, 00:11:26.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.271 "is_configured": false, 00:11:26.271 "data_offset": 2048, 00:11:26.271 "data_size": 63488 00:11:26.271 } 00:11:26.271 ] 00:11:26.271 }' 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.271 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.837 [2024-11-20 07:08:08.922898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.837 [2024-11-20 07:08:08.923114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.837 [2024-11-20 07:08:08.923159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:26.837 [2024-11-20 07:08:08.923248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.837 [2024-11-20 07:08:08.923870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.837 [2024-11-20 07:08:08.923936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.837 [2024-11-20 07:08:08.924069] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.837 [2024-11-20 07:08:08.924125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.837 [2024-11-20 07:08:08.924281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.837 [2024-11-20 07:08:08.924322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.837 [2024-11-20 07:08:08.924659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:26.837 [2024-11-20 07:08:08.924915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.837 [2024-11-20 07:08:08.924961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:26.837 [2024-11-20 07:08:08.925198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.837 pt2 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.837 
07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.837 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.838 "name": "raid_bdev1", 00:11:26.838 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:26.838 "strip_size_kb": 0, 00:11:26.838 "state": "online", 00:11:26.838 "raid_level": "raid1", 00:11:26.838 "superblock": true, 00:11:26.838 "num_base_bdevs": 2, 00:11:26.838 "num_base_bdevs_discovered": 2, 00:11:26.838 "num_base_bdevs_operational": 2, 00:11:26.838 "base_bdevs_list": [ 00:11:26.838 { 00:11:26.838 "name": "pt1", 00:11:26.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.838 "is_configured": true, 00:11:26.838 "data_offset": 2048, 00:11:26.838 "data_size": 63488 00:11:26.838 }, 00:11:26.838 { 00:11:26.838 "name": "pt2", 00:11:26.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.838 "is_configured": true, 00:11:26.838 "data_offset": 2048, 00:11:26.838 "data_size": 63488 00:11:26.838 } 00:11:26.838 ] 00:11:26.838 }' 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:26.838 07:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.406 [2024-11-20 07:08:09.378391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.406 "name": "raid_bdev1", 00:11:27.406 "aliases": [ 00:11:27.406 "f99c4d35-0e37-40fe-b17a-9e8f70fe8267" 00:11:27.406 ], 00:11:27.406 "product_name": "Raid Volume", 00:11:27.406 "block_size": 512, 00:11:27.406 "num_blocks": 63488, 00:11:27.406 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:27.406 "assigned_rate_limits": { 00:11:27.406 "rw_ios_per_sec": 0, 00:11:27.406 "rw_mbytes_per_sec": 0, 00:11:27.406 "r_mbytes_per_sec": 0, 00:11:27.406 "w_mbytes_per_sec": 0 
00:11:27.406 }, 00:11:27.406 "claimed": false, 00:11:27.406 "zoned": false, 00:11:27.406 "supported_io_types": { 00:11:27.406 "read": true, 00:11:27.406 "write": true, 00:11:27.406 "unmap": false, 00:11:27.406 "flush": false, 00:11:27.406 "reset": true, 00:11:27.406 "nvme_admin": false, 00:11:27.406 "nvme_io": false, 00:11:27.406 "nvme_io_md": false, 00:11:27.406 "write_zeroes": true, 00:11:27.406 "zcopy": false, 00:11:27.406 "get_zone_info": false, 00:11:27.406 "zone_management": false, 00:11:27.406 "zone_append": false, 00:11:27.406 "compare": false, 00:11:27.406 "compare_and_write": false, 00:11:27.406 "abort": false, 00:11:27.406 "seek_hole": false, 00:11:27.406 "seek_data": false, 00:11:27.406 "copy": false, 00:11:27.406 "nvme_iov_md": false 00:11:27.406 }, 00:11:27.406 "memory_domains": [ 00:11:27.406 { 00:11:27.406 "dma_device_id": "system", 00:11:27.406 "dma_device_type": 1 00:11:27.406 }, 00:11:27.406 { 00:11:27.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.406 "dma_device_type": 2 00:11:27.406 }, 00:11:27.406 { 00:11:27.406 "dma_device_id": "system", 00:11:27.406 "dma_device_type": 1 00:11:27.406 }, 00:11:27.406 { 00:11:27.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.406 "dma_device_type": 2 00:11:27.406 } 00:11:27.406 ], 00:11:27.406 "driver_specific": { 00:11:27.406 "raid": { 00:11:27.406 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:27.406 "strip_size_kb": 0, 00:11:27.406 "state": "online", 00:11:27.406 "raid_level": "raid1", 00:11:27.406 "superblock": true, 00:11:27.406 "num_base_bdevs": 2, 00:11:27.406 "num_base_bdevs_discovered": 2, 00:11:27.406 "num_base_bdevs_operational": 2, 00:11:27.406 "base_bdevs_list": [ 00:11:27.406 { 00:11:27.406 "name": "pt1", 00:11:27.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.406 "is_configured": true, 00:11:27.406 "data_offset": 2048, 00:11:27.406 "data_size": 63488 00:11:27.406 }, 00:11:27.406 { 00:11:27.406 "name": "pt2", 00:11:27.406 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:27.406 "is_configured": true, 00:11:27.406 "data_offset": 2048, 00:11:27.406 "data_size": 63488 00:11:27.406 } 00:11:27.406 ] 00:11:27.406 } 00:11:27.406 } 00:11:27.406 }' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:27.406 pt2' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:27.406 [2024-11-20 07:08:09.617920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f99c4d35-0e37-40fe-b17a-9e8f70fe8267 '!=' f99c4d35-0e37-40fe-b17a-9e8f70fe8267 ']' 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.406 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:27.407 [2024-11-20 07:08:09.649676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.407 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.718 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.718 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.718 "name": "raid_bdev1", 
00:11:27.718 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:27.718 "strip_size_kb": 0, 00:11:27.718 "state": "online", 00:11:27.718 "raid_level": "raid1", 00:11:27.718 "superblock": true, 00:11:27.718 "num_base_bdevs": 2, 00:11:27.718 "num_base_bdevs_discovered": 1, 00:11:27.718 "num_base_bdevs_operational": 1, 00:11:27.718 "base_bdevs_list": [ 00:11:27.718 { 00:11:27.718 "name": null, 00:11:27.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.718 "is_configured": false, 00:11:27.718 "data_offset": 0, 00:11:27.718 "data_size": 63488 00:11:27.718 }, 00:11:27.718 { 00:11:27.718 "name": "pt2", 00:11:27.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.718 "is_configured": true, 00:11:27.718 "data_offset": 2048, 00:11:27.718 "data_size": 63488 00:11:27.718 } 00:11:27.718 ] 00:11:27.718 }' 00:11:27.718 07:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.718 07:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 [2024-11-20 07:08:10.132818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.978 [2024-11-20 07:08:10.132945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.978 [2024-11-20 07:08:10.133082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.978 [2024-11-20 07:08:10.133169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.978 [2024-11-20 07:08:10.133225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:27.978 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:11:27.979 07:08:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.979 [2024-11-20 07:08:10.208656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.979 [2024-11-20 07:08:10.208766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.979 [2024-11-20 07:08:10.208789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:27.979 [2024-11-20 07:08:10.208801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.979 [2024-11-20 07:08:10.211436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.979 [2024-11-20 07:08:10.211477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.979 [2024-11-20 07:08:10.211599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.979 [2024-11-20 07:08:10.211657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.979 [2024-11-20 07:08:10.211769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:27.979 [2024-11-20 07:08:10.211783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.979 [2024-11-20 07:08:10.212028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.979 [2024-11-20 07:08:10.212196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:27.979 [2024-11-20 07:08:10.212206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:27.979 
[2024-11-20 07:08:10.212394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.979 pt2 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.979 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.239 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.239 "name": 
"raid_bdev1", 00:11:28.239 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:28.239 "strip_size_kb": 0, 00:11:28.239 "state": "online", 00:11:28.239 "raid_level": "raid1", 00:11:28.239 "superblock": true, 00:11:28.239 "num_base_bdevs": 2, 00:11:28.239 "num_base_bdevs_discovered": 1, 00:11:28.239 "num_base_bdevs_operational": 1, 00:11:28.239 "base_bdevs_list": [ 00:11:28.239 { 00:11:28.239 "name": null, 00:11:28.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.239 "is_configured": false, 00:11:28.239 "data_offset": 2048, 00:11:28.239 "data_size": 63488 00:11:28.239 }, 00:11:28.239 { 00:11:28.239 "name": "pt2", 00:11:28.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.239 "is_configured": true, 00:11:28.239 "data_offset": 2048, 00:11:28.239 "data_size": 63488 00:11:28.239 } 00:11:28.239 ] 00:11:28.239 }' 00:11:28.239 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.239 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [2024-11-20 07:08:10.671866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.499 [2024-11-20 07:08:10.672000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.499 [2024-11-20 07:08:10.672131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.499 [2024-11-20 07:08:10.672216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.499 [2024-11-20 07:08:10.672259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [2024-11-20 07:08:10.731823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.499 [2024-11-20 07:08:10.732001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.499 [2024-11-20 07:08:10.732053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:28.499 [2024-11-20 07:08:10.732083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.499 [2024-11-20 07:08:10.734934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.499 [2024-11-20 07:08:10.735022] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.499 [2024-11-20 07:08:10.735174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:28.499 [2024-11-20 07:08:10.735269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.499 [2024-11-20 07:08:10.735466] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:28.499 [2024-11-20 07:08:10.735521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.499 [2024-11-20 07:08:10.735565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:28.499 [2024-11-20 07:08:10.735678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.499 [2024-11-20 07:08:10.735805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:28.499 [2024-11-20 07:08:10.735843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.499 [2024-11-20 07:08:10.736134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:28.499 [2024-11-20 07:08:10.736316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:28.499 [2024-11-20 07:08:10.736377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:28.499 [2024-11-20 07:08:10.736634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.499 pt1 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.758 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.758 "name": "raid_bdev1", 00:11:28.758 "uuid": "f99c4d35-0e37-40fe-b17a-9e8f70fe8267", 00:11:28.758 "strip_size_kb": 0, 00:11:28.758 "state": "online", 00:11:28.758 "raid_level": "raid1", 00:11:28.758 "superblock": true, 00:11:28.758 "num_base_bdevs": 2, 00:11:28.758 "num_base_bdevs_discovered": 1, 00:11:28.758 "num_base_bdevs_operational": 1, 00:11:28.758 
"base_bdevs_list": [ 00:11:28.758 { 00:11:28.758 "name": null, 00:11:28.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.758 "is_configured": false, 00:11:28.758 "data_offset": 2048, 00:11:28.758 "data_size": 63488 00:11:28.758 }, 00:11:28.758 { 00:11:28.758 "name": "pt2", 00:11:28.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.758 "is_configured": true, 00:11:28.758 "data_offset": 2048, 00:11:28.758 "data_size": 63488 00:11:28.758 } 00:11:28.758 ] 00:11:28.758 }' 00:11:28.758 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.758 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:29.017 [2024-11-20 07:08:11.207439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f99c4d35-0e37-40fe-b17a-9e8f70fe8267 '!=' f99c4d35-0e37-40fe-b17a-9e8f70fe8267 ']' 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63501 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63501 ']' 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63501 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.017 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63501 00:11:29.275 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.275 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.275 killing process with pid 63501 00:11:29.275 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63501' 00:11:29.275 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63501 00:11:29.275 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63501 00:11:29.275 [2024-11-20 07:08:11.300848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.275 [2024-11-20 07:08:11.300993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.275 [2024-11-20 07:08:11.301060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.275 [2024-11-20 07:08:11.301078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:29.533 [2024-11-20 07:08:11.550068] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.910 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:30.910 00:11:30.910 real 0m6.403s 00:11:30.910 user 0m9.531s 00:11:30.910 sys 0m1.106s 00:11:30.910 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.910 ************************************ 00:11:30.910 END TEST raid_superblock_test 00:11:30.910 ************************************ 00:11:30.910 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.910 07:08:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:30.910 07:08:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:30.910 07:08:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.910 07:08:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.910 ************************************ 00:11:30.910 START TEST raid_read_error_test 00:11:30.910 ************************************ 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:30.910 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GFrVzICm6l 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63831 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63831 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63831 ']' 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.911 07:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.911 [2024-11-20 07:08:13.008575] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:30.911 [2024-11-20 07:08:13.008714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63831 ] 00:11:31.169 [2024-11-20 07:08:13.191921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.169 [2024-11-20 07:08:13.344092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.428 [2024-11-20 07:08:13.601863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.428 [2024-11-20 07:08:13.601938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.688 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.688 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:31.688 07:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.688 07:08:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:31.688 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.688 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 BaseBdev1_malloc 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 true 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 [2024-11-20 07:08:13.988651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:31.947 [2024-11-20 07:08:13.988814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.947 [2024-11-20 07:08:13.988844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:31.947 [2024-11-20 07:08:13.988858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.947 [2024-11-20 07:08:13.991700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.947 [2024-11-20 07:08:13.991747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:11:31.947 BaseBdev1 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.947 07:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 BaseBdev2_malloc 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 true 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 [2024-11-20 07:08:14.068258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:31.947 [2024-11-20 07:08:14.068374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.947 [2024-11-20 07:08:14.068402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:31.947 [2024-11-20 07:08:14.068417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:11:31.947 [2024-11-20 07:08:14.071387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.947 [2024-11-20 07:08:14.071439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:31.947 BaseBdev2 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 [2024-11-20 07:08:14.080400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.947 [2024-11-20 07:08:14.082990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.947 [2024-11-20 07:08:14.083393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:31.947 [2024-11-20 07:08:14.083417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.947 [2024-11-20 07:08:14.083766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:31.947 [2024-11-20 07:08:14.083997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:31.947 [2024-11-20 07:08:14.084010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:31.947 [2024-11-20 07:08:14.084299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.947 "name": "raid_bdev1", 00:11:31.947 "uuid": "b88fe919-7756-4b0a-bb1e-9f115a3ba552", 00:11:31.947 "strip_size_kb": 0, 00:11:31.947 "state": "online", 00:11:31.947 "raid_level": "raid1", 00:11:31.947 "superblock": true, 00:11:31.947 "num_base_bdevs": 2, 00:11:31.947 "num_base_bdevs_discovered": 2, 00:11:31.947 "num_base_bdevs_operational": 
2, 00:11:31.947 "base_bdevs_list": [ 00:11:31.947 { 00:11:31.947 "name": "BaseBdev1", 00:11:31.947 "uuid": "b0a7e2b3-7179-5593-9ac8-47fa6de43146", 00:11:31.947 "is_configured": true, 00:11:31.947 "data_offset": 2048, 00:11:31.947 "data_size": 63488 00:11:31.947 }, 00:11:31.947 { 00:11:31.947 "name": "BaseBdev2", 00:11:31.947 "uuid": "3275e272-1a18-50d2-9bf8-7097b456304b", 00:11:31.947 "is_configured": true, 00:11:31.947 "data_offset": 2048, 00:11:31.947 "data_size": 63488 00:11:31.947 } 00:11:31.947 ] 00:11:31.947 }' 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.947 07:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.515 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:32.515 07:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:32.515 [2024-11-20 07:08:14.629225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:33.476 
07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.476 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.476 "name": "raid_bdev1", 00:11:33.476 "uuid": "b88fe919-7756-4b0a-bb1e-9f115a3ba552", 00:11:33.476 "strip_size_kb": 0, 00:11:33.476 "state": "online", 00:11:33.476 "raid_level": "raid1", 00:11:33.476 "superblock": true, 00:11:33.476 "num_base_bdevs": 
2, 00:11:33.477 "num_base_bdevs_discovered": 2, 00:11:33.477 "num_base_bdevs_operational": 2, 00:11:33.477 "base_bdevs_list": [ 00:11:33.477 { 00:11:33.477 "name": "BaseBdev1", 00:11:33.477 "uuid": "b0a7e2b3-7179-5593-9ac8-47fa6de43146", 00:11:33.477 "is_configured": true, 00:11:33.477 "data_offset": 2048, 00:11:33.477 "data_size": 63488 00:11:33.477 }, 00:11:33.477 { 00:11:33.477 "name": "BaseBdev2", 00:11:33.477 "uuid": "3275e272-1a18-50d2-9bf8-7097b456304b", 00:11:33.477 "is_configured": true, 00:11:33.477 "data_offset": 2048, 00:11:33.477 "data_size": 63488 00:11:33.477 } 00:11:33.477 ] 00:11:33.477 }' 00:11:33.477 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.477 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.735 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.735 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.735 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.735 [2024-11-20 07:08:15.947660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.735 [2024-11-20 07:08:15.947722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.735 [2024-11-20 07:08:15.951052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.735 [2024-11-20 07:08:15.951114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.735 [2024-11-20 07:08:15.951219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.735 [2024-11-20 07:08:15.951235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:33.735 { 00:11:33.735 "results": [ 00:11:33.735 { 00:11:33.735 "job": 
"raid_bdev1", 00:11:33.735 "core_mask": "0x1", 00:11:33.735 "workload": "randrw", 00:11:33.735 "percentage": 50, 00:11:33.735 "status": "finished", 00:11:33.735 "queue_depth": 1, 00:11:33.735 "io_size": 131072, 00:11:33.735 "runtime": 1.318538, 00:11:33.735 "iops": 11256.406717136708, 00:11:33.735 "mibps": 1407.0508396420885, 00:11:33.735 "io_failed": 0, 00:11:33.735 "io_timeout": 0, 00:11:33.735 "avg_latency_us": 85.6535626208876, 00:11:33.735 "min_latency_us": 26.829694323144103, 00:11:33.735 "max_latency_us": 1695.6366812227075 00:11:33.735 } 00:11:33.735 ], 00:11:33.735 "core_count": 1 00:11:33.735 } 00:11:33.735 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.735 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63831 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63831 ']' 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63831 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63831 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63831' 00:11:33.736 killing process with pid 63831 00:11:33.736 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63831 00:11:33.736 [2024-11-20 07:08:15.994808] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.736 07:08:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63831 00:11:33.995 [2024-11-20 07:08:16.190355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GFrVzICm6l 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:35.375 00:11:35.375 real 0m4.602s 00:11:35.375 user 0m5.412s 00:11:35.375 sys 0m0.627s 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.375 07:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.375 ************************************ 00:11:35.375 END TEST raid_read_error_test 00:11:35.375 ************************************ 00:11:35.375 07:08:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:35.375 07:08:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.375 07:08:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.375 07:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.375 ************************************ 00:11:35.375 START TEST raid_write_error_test 00:11:35.375 ************************************ 00:11:35.375 07:08:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.375 
07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HOtCju2Iut 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63971 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63971 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63971 ']' 00:11:35.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.375 07:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.635 [2024-11-20 07:08:17.675436] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:35.635 [2024-11-20 07:08:17.675562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63971 ] 00:11:35.635 [2024-11-20 07:08:17.851527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.895 [2024-11-20 07:08:17.977166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.154 [2024-11-20 07:08:18.197059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.154 [2024-11-20 07:08:18.197097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 BaseBdev1_malloc 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 true 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 [2024-11-20 07:08:18.596268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.414 [2024-11-20 07:08:18.596329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.414 [2024-11-20 07:08:18.596369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.414 [2024-11-20 07:08:18.596389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.414 [2024-11-20 07:08:18.598766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.414 [2024-11-20 07:08:18.598809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.414 BaseBdev1 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 BaseBdev2_malloc 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.414 07:08:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 true 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 [2024-11-20 07:08:18.660312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.414 [2024-11-20 07:08:18.660390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.414 [2024-11-20 07:08:18.660428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.415 [2024-11-20 07:08:18.660441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.415 [2024-11-20 07:08:18.663003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.415 [2024-11-20 07:08:18.663048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.415 BaseBdev2 00:11:36.415 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:36.415 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 [2024-11-20 07:08:18.672400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:36.415 [2024-11-20 07:08:18.674614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.415 [2024-11-20 07:08:18.674849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.415 [2024-11-20 07:08:18.674867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.415 [2024-11-20 07:08:18.675165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:36.415 [2024-11-20 07:08:18.675408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.415 [2024-11-20 07:08:18.675424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:36.415 [2024-11-20 07:08:18.675637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.674 "name": "raid_bdev1", 00:11:36.674 "uuid": "f84b8c26-28c7-4c3a-b69c-4fcaf216d2cf", 00:11:36.674 "strip_size_kb": 0, 00:11:36.674 "state": "online", 00:11:36.674 "raid_level": "raid1", 00:11:36.674 "superblock": true, 00:11:36.674 "num_base_bdevs": 2, 00:11:36.674 "num_base_bdevs_discovered": 2, 00:11:36.674 "num_base_bdevs_operational": 2, 00:11:36.674 "base_bdevs_list": [ 00:11:36.674 { 00:11:36.674 "name": "BaseBdev1", 00:11:36.674 "uuid": "8d428ca0-7eea-5ff6-9116-eb6d07e4eec7", 00:11:36.674 "is_configured": true, 00:11:36.674 "data_offset": 2048, 00:11:36.674 "data_size": 63488 00:11:36.674 }, 00:11:36.674 { 00:11:36.674 "name": "BaseBdev2", 00:11:36.674 "uuid": "8118360f-d0c7-537b-a0b9-4bc86e8b40d7", 00:11:36.674 "is_configured": true, 00:11:36.674 "data_offset": 2048, 00:11:36.674 "data_size": 63488 00:11:36.674 } 00:11:36.674 ] 00:11:36.674 }' 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.674 07:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.933 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:36.933 07:08:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.192 [2024-11-20 07:08:19.288966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:38.128 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:38.128 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.128 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.128 [2024-11-20 07:08:20.194101] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:38.128 [2024-11-20 07:08:20.194252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.128 [2024-11-20 07:08:20.194492] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:38.128 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.129 "name": "raid_bdev1", 00:11:38.129 "uuid": "f84b8c26-28c7-4c3a-b69c-4fcaf216d2cf", 00:11:38.129 "strip_size_kb": 0, 00:11:38.129 "state": "online", 00:11:38.129 "raid_level": "raid1", 00:11:38.129 "superblock": true, 00:11:38.129 "num_base_bdevs": 2, 00:11:38.129 "num_base_bdevs_discovered": 1, 00:11:38.129 "num_base_bdevs_operational": 1, 00:11:38.129 "base_bdevs_list": [ 00:11:38.129 { 00:11:38.129 "name": null, 00:11:38.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.129 "is_configured": false, 00:11:38.129 "data_offset": 0, 00:11:38.129 "data_size": 63488 00:11:38.129 }, 00:11:38.129 { 00:11:38.129 "name": 
"BaseBdev2", 00:11:38.129 "uuid": "8118360f-d0c7-537b-a0b9-4bc86e8b40d7", 00:11:38.129 "is_configured": true, 00:11:38.129 "data_offset": 2048, 00:11:38.129 "data_size": 63488 00:11:38.129 } 00:11:38.129 ] 00:11:38.129 }' 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.129 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.718 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.718 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.718 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.718 [2024-11-20 07:08:20.692503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.718 [2024-11-20 07:08:20.692612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.718 [2024-11-20 07:08:20.695885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.718 [2024-11-20 07:08:20.695972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.718 [2024-11-20 07:08:20.696064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.718 [2024-11-20 07:08:20.696129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:38.718 { 00:11:38.718 "results": [ 00:11:38.718 { 00:11:38.718 "job": "raid_bdev1", 00:11:38.718 "core_mask": "0x1", 00:11:38.718 "workload": "randrw", 00:11:38.718 "percentage": 50, 00:11:38.718 "status": "finished", 00:11:38.718 "queue_depth": 1, 00:11:38.718 "io_size": 131072, 00:11:38.718 "runtime": 1.404128, 00:11:38.718 "iops": 19031.02851022129, 00:11:38.718 "mibps": 2378.8785637776614, 00:11:38.718 "io_failed": 0, 00:11:38.718 "io_timeout": 0, 
00:11:38.718 "avg_latency_us": 49.60262355176328, 00:11:38.718 "min_latency_us": 23.699563318777294, 00:11:38.719 "max_latency_us": 1588.317903930131 00:11:38.719 } 00:11:38.719 ], 00:11:38.719 "core_count": 1 00:11:38.719 } 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63971 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63971 ']' 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63971 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63971 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.719 killing process with pid 63971 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63971' 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63971 00:11:38.719 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63971 00:11:38.719 [2024-11-20 07:08:20.744202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.719 [2024-11-20 07:08:20.886879] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HOtCju2Iut 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:40.097 ************************************ 00:11:40.097 END TEST raid_write_error_test 00:11:40.097 ************************************ 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:40.097 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:40.098 00:11:40.098 real 0m4.583s 00:11:40.098 user 0m5.576s 00:11:40.098 sys 0m0.583s 00:11:40.098 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.098 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.098 07:08:22 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:40.098 07:08:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:40.098 07:08:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:40.098 07:08:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.098 07:08:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.098 07:08:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.098 ************************************ 00:11:40.098 START TEST raid_state_function_test 00:11:40.098 ************************************ 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:40.098 
07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:40.098 Process raid pid: 64115 00:11:40.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64115 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64115' 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64115 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64115 ']' 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.098 07:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:40.098 [2024-11-20 07:08:22.304536] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:40.098 [2024-11-20 07:08:22.304656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.358 [2024-11-20 07:08:22.480275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.358 [2024-11-20 07:08:22.607652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.618 [2024-11-20 07:08:22.826571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.618 [2024-11-20 07:08:22.826624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.186 [2024-11-20 07:08:23.168509] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.186 [2024-11-20 
07:08:23.168569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.186 [2024-11-20 07:08:23.168579] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.186 [2024-11-20 07:08:23.168605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.186 [2024-11-20 07:08:23.168612] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.186 [2024-11-20 07:08:23.168621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.186 "name": "Existed_Raid", 00:11:41.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.186 "strip_size_kb": 64, 00:11:41.186 "state": "configuring", 00:11:41.186 "raid_level": "raid0", 00:11:41.186 "superblock": false, 00:11:41.186 "num_base_bdevs": 3, 00:11:41.186 "num_base_bdevs_discovered": 0, 00:11:41.186 "num_base_bdevs_operational": 3, 00:11:41.186 "base_bdevs_list": [ 00:11:41.186 { 00:11:41.186 "name": "BaseBdev1", 00:11:41.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.186 "is_configured": false, 00:11:41.186 "data_offset": 0, 00:11:41.186 "data_size": 0 00:11:41.186 }, 00:11:41.186 { 00:11:41.186 "name": "BaseBdev2", 00:11:41.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.186 "is_configured": false, 00:11:41.186 "data_offset": 0, 00:11:41.186 "data_size": 0 00:11:41.186 }, 00:11:41.186 { 00:11:41.186 "name": "BaseBdev3", 00:11:41.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.186 "is_configured": false, 00:11:41.186 "data_offset": 0, 00:11:41.186 "data_size": 0 00:11:41.186 } 00:11:41.186 ] 00:11:41.186 }' 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.186 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.445 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:41.445 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.445 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.445 [2024-11-20 07:08:23.659600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.445 [2024-11-20 07:08:23.659646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:41.445 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.445 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:41.445 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.445 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.445 [2024-11-20 07:08:23.667591] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.445 [2024-11-20 07:08:23.667641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.446 [2024-11-20 07:08:23.667652] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.446 [2024-11-20 07:08:23.667663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.446 [2024-11-20 07:08:23.667670] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.446 [2024-11-20 07:08:23.667681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.446 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.446 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.446 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.446 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.705 [2024-11-20 07:08:23.715201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.705 BaseBdev1 00:11:41.705 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.705 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:41.705 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.705 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.706 [ 00:11:41.706 { 
00:11:41.706 "name": "BaseBdev1", 00:11:41.706 "aliases": [ 00:11:41.706 "8ee7e782-656c-4244-abca-31d5f75034fe" 00:11:41.706 ], 00:11:41.706 "product_name": "Malloc disk", 00:11:41.706 "block_size": 512, 00:11:41.706 "num_blocks": 65536, 00:11:41.706 "uuid": "8ee7e782-656c-4244-abca-31d5f75034fe", 00:11:41.706 "assigned_rate_limits": { 00:11:41.706 "rw_ios_per_sec": 0, 00:11:41.706 "rw_mbytes_per_sec": 0, 00:11:41.706 "r_mbytes_per_sec": 0, 00:11:41.706 "w_mbytes_per_sec": 0 00:11:41.706 }, 00:11:41.706 "claimed": true, 00:11:41.706 "claim_type": "exclusive_write", 00:11:41.706 "zoned": false, 00:11:41.706 "supported_io_types": { 00:11:41.706 "read": true, 00:11:41.706 "write": true, 00:11:41.706 "unmap": true, 00:11:41.706 "flush": true, 00:11:41.706 "reset": true, 00:11:41.706 "nvme_admin": false, 00:11:41.706 "nvme_io": false, 00:11:41.706 "nvme_io_md": false, 00:11:41.706 "write_zeroes": true, 00:11:41.706 "zcopy": true, 00:11:41.706 "get_zone_info": false, 00:11:41.706 "zone_management": false, 00:11:41.706 "zone_append": false, 00:11:41.706 "compare": false, 00:11:41.706 "compare_and_write": false, 00:11:41.706 "abort": true, 00:11:41.706 "seek_hole": false, 00:11:41.706 "seek_data": false, 00:11:41.706 "copy": true, 00:11:41.706 "nvme_iov_md": false 00:11:41.706 }, 00:11:41.706 "memory_domains": [ 00:11:41.706 { 00:11:41.706 "dma_device_id": "system", 00:11:41.706 "dma_device_type": 1 00:11:41.706 }, 00:11:41.706 { 00:11:41.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.706 "dma_device_type": 2 00:11:41.706 } 00:11:41.706 ], 00:11:41.706 "driver_specific": {} 00:11:41.706 } 00:11:41.706 ] 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.706 "name": "Existed_Raid", 00:11:41.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.706 "strip_size_kb": 64, 00:11:41.706 "state": "configuring", 00:11:41.706 "raid_level": "raid0", 00:11:41.706 "superblock": false, 00:11:41.706 "num_base_bdevs": 3, 00:11:41.706 
"num_base_bdevs_discovered": 1, 00:11:41.706 "num_base_bdevs_operational": 3, 00:11:41.706 "base_bdevs_list": [ 00:11:41.706 { 00:11:41.706 "name": "BaseBdev1", 00:11:41.706 "uuid": "8ee7e782-656c-4244-abca-31d5f75034fe", 00:11:41.706 "is_configured": true, 00:11:41.706 "data_offset": 0, 00:11:41.706 "data_size": 65536 00:11:41.706 }, 00:11:41.706 { 00:11:41.706 "name": "BaseBdev2", 00:11:41.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.706 "is_configured": false, 00:11:41.706 "data_offset": 0, 00:11:41.706 "data_size": 0 00:11:41.706 }, 00:11:41.706 { 00:11:41.706 "name": "BaseBdev3", 00:11:41.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.706 "is_configured": false, 00:11:41.706 "data_offset": 0, 00:11:41.706 "data_size": 0 00:11:41.706 } 00:11:41.706 ] 00:11:41.706 }' 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.706 07:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.965 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.965 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.965 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.965 [2024-11-20 07:08:24.226452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.965 [2024-11-20 07:08:24.226517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.224 [2024-11-20 07:08:24.238510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.224 [2024-11-20 07:08:24.240450] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.224 [2024-11-20 07:08:24.240497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.224 [2024-11-20 07:08:24.240509] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.224 [2024-11-20 07:08:24.240519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.224 07:08:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.224 "name": "Existed_Raid", 00:11:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.224 "strip_size_kb": 64, 00:11:42.224 "state": "configuring", 00:11:42.224 "raid_level": "raid0", 00:11:42.224 "superblock": false, 00:11:42.224 "num_base_bdevs": 3, 00:11:42.224 "num_base_bdevs_discovered": 1, 00:11:42.224 "num_base_bdevs_operational": 3, 00:11:42.224 "base_bdevs_list": [ 00:11:42.224 { 00:11:42.224 "name": "BaseBdev1", 00:11:42.224 "uuid": "8ee7e782-656c-4244-abca-31d5f75034fe", 00:11:42.224 "is_configured": true, 00:11:42.224 "data_offset": 0, 00:11:42.224 "data_size": 65536 00:11:42.224 }, 00:11:42.224 { 00:11:42.224 "name": "BaseBdev2", 00:11:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.224 "is_configured": false, 00:11:42.224 "data_offset": 0, 00:11:42.224 "data_size": 0 00:11:42.224 }, 00:11:42.224 { 00:11:42.224 "name": "BaseBdev3", 00:11:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.224 "is_configured": false, 00:11:42.224 "data_offset": 
0, 00:11:42.224 "data_size": 0 00:11:42.224 } 00:11:42.224 ] 00:11:42.224 }' 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.224 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.484 [2024-11-20 07:08:24.703576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.484 BaseBdev2 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.484 [ 00:11:42.484 { 00:11:42.484 "name": "BaseBdev2", 00:11:42.484 "aliases": [ 00:11:42.484 "a189538e-c58e-41bf-bf7c-614177a4f87a" 00:11:42.484 ], 00:11:42.484 "product_name": "Malloc disk", 00:11:42.484 "block_size": 512, 00:11:42.484 "num_blocks": 65536, 00:11:42.484 "uuid": "a189538e-c58e-41bf-bf7c-614177a4f87a", 00:11:42.484 "assigned_rate_limits": { 00:11:42.484 "rw_ios_per_sec": 0, 00:11:42.484 "rw_mbytes_per_sec": 0, 00:11:42.484 "r_mbytes_per_sec": 0, 00:11:42.484 "w_mbytes_per_sec": 0 00:11:42.484 }, 00:11:42.484 "claimed": true, 00:11:42.484 "claim_type": "exclusive_write", 00:11:42.484 "zoned": false, 00:11:42.484 "supported_io_types": { 00:11:42.484 "read": true, 00:11:42.484 "write": true, 00:11:42.484 "unmap": true, 00:11:42.484 "flush": true, 00:11:42.484 "reset": true, 00:11:42.484 "nvme_admin": false, 00:11:42.484 "nvme_io": false, 00:11:42.484 "nvme_io_md": false, 00:11:42.484 "write_zeroes": true, 00:11:42.484 "zcopy": true, 00:11:42.484 "get_zone_info": false, 00:11:42.484 "zone_management": false, 00:11:42.484 "zone_append": false, 00:11:42.484 "compare": false, 00:11:42.484 "compare_and_write": false, 00:11:42.484 "abort": true, 00:11:42.484 "seek_hole": false, 00:11:42.484 "seek_data": false, 00:11:42.484 "copy": true, 00:11:42.484 "nvme_iov_md": false 00:11:42.484 }, 00:11:42.484 "memory_domains": [ 00:11:42.484 { 00:11:42.484 "dma_device_id": "system", 00:11:42.484 "dma_device_type": 1 00:11:42.484 }, 00:11:42.484 { 00:11:42.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.484 "dma_device_type": 2 00:11:42.484 } 00:11:42.484 ], 00:11:42.484 "driver_specific": {} 00:11:42.484 } 
00:11:42.484 ] 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.484 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.743 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.743 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.743 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.743 07:08:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.743 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.743 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.743 "name": "Existed_Raid", 00:11:42.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.743 "strip_size_kb": 64, 00:11:42.743 "state": "configuring", 00:11:42.743 "raid_level": "raid0", 00:11:42.743 "superblock": false, 00:11:42.743 "num_base_bdevs": 3, 00:11:42.743 "num_base_bdevs_discovered": 2, 00:11:42.743 "num_base_bdevs_operational": 3, 00:11:42.743 "base_bdevs_list": [ 00:11:42.743 { 00:11:42.743 "name": "BaseBdev1", 00:11:42.743 "uuid": "8ee7e782-656c-4244-abca-31d5f75034fe", 00:11:42.743 "is_configured": true, 00:11:42.743 "data_offset": 0, 00:11:42.743 "data_size": 65536 00:11:42.743 }, 00:11:42.743 { 00:11:42.743 "name": "BaseBdev2", 00:11:42.743 "uuid": "a189538e-c58e-41bf-bf7c-614177a4f87a", 00:11:42.743 "is_configured": true, 00:11:42.743 "data_offset": 0, 00:11:42.743 "data_size": 65536 00:11:42.744 }, 00:11:42.744 { 00:11:42.744 "name": "BaseBdev3", 00:11:42.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.744 "is_configured": false, 00:11:42.744 "data_offset": 0, 00:11:42.744 "data_size": 0 00:11:42.744 } 00:11:42.744 ] 00:11:42.744 }' 00:11:42.744 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.744 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.003 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:43.003 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.003 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.262 [2024-11-20 07:08:25.284649] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.262 [2024-11-20 07:08:25.284701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:43.262 [2024-11-20 07:08:25.284715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:43.262 [2024-11-20 07:08:25.284998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:43.262 [2024-11-20 07:08:25.285168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:43.262 [2024-11-20 07:08:25.285177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:43.262 [2024-11-20 07:08:25.285528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.262 BaseBdev3 00:11:43.262 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.263 [ 00:11:43.263 { 00:11:43.263 "name": "BaseBdev3", 00:11:43.263 "aliases": [ 00:11:43.263 "6157063e-d211-4dca-8c98-23754306d728" 00:11:43.263 ], 00:11:43.263 "product_name": "Malloc disk", 00:11:43.263 "block_size": 512, 00:11:43.263 "num_blocks": 65536, 00:11:43.263 "uuid": "6157063e-d211-4dca-8c98-23754306d728", 00:11:43.263 "assigned_rate_limits": { 00:11:43.263 "rw_ios_per_sec": 0, 00:11:43.263 "rw_mbytes_per_sec": 0, 00:11:43.263 "r_mbytes_per_sec": 0, 00:11:43.263 "w_mbytes_per_sec": 0 00:11:43.263 }, 00:11:43.263 "claimed": true, 00:11:43.263 "claim_type": "exclusive_write", 00:11:43.263 "zoned": false, 00:11:43.263 "supported_io_types": { 00:11:43.263 "read": true, 00:11:43.263 "write": true, 00:11:43.263 "unmap": true, 00:11:43.263 "flush": true, 00:11:43.263 "reset": true, 00:11:43.263 "nvme_admin": false, 00:11:43.263 "nvme_io": false, 00:11:43.263 "nvme_io_md": false, 00:11:43.263 "write_zeroes": true, 00:11:43.263 "zcopy": true, 00:11:43.263 "get_zone_info": false, 00:11:43.263 "zone_management": false, 00:11:43.263 "zone_append": false, 00:11:43.263 "compare": false, 00:11:43.263 "compare_and_write": false, 00:11:43.263 "abort": true, 00:11:43.263 "seek_hole": false, 00:11:43.263 "seek_data": false, 00:11:43.263 "copy": true, 00:11:43.263 "nvme_iov_md": false 00:11:43.263 }, 00:11:43.263 "memory_domains": [ 00:11:43.263 { 00:11:43.263 "dma_device_id": "system", 00:11:43.263 "dma_device_type": 1 00:11:43.263 }, 00:11:43.263 { 00:11:43.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:43.263 "dma_device_type": 2 00:11:43.263 } 00:11:43.263 ], 00:11:43.263 "driver_specific": {} 00:11:43.263 } 00:11:43.263 ] 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.263 07:08:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.263 "name": "Existed_Raid", 00:11:43.263 "uuid": "584e35fa-b30d-473f-8984-18d1d7a67234", 00:11:43.263 "strip_size_kb": 64, 00:11:43.263 "state": "online", 00:11:43.263 "raid_level": "raid0", 00:11:43.263 "superblock": false, 00:11:43.263 "num_base_bdevs": 3, 00:11:43.263 "num_base_bdevs_discovered": 3, 00:11:43.263 "num_base_bdevs_operational": 3, 00:11:43.263 "base_bdevs_list": [ 00:11:43.263 { 00:11:43.263 "name": "BaseBdev1", 00:11:43.263 "uuid": "8ee7e782-656c-4244-abca-31d5f75034fe", 00:11:43.263 "is_configured": true, 00:11:43.263 "data_offset": 0, 00:11:43.263 "data_size": 65536 00:11:43.263 }, 00:11:43.263 { 00:11:43.263 "name": "BaseBdev2", 00:11:43.263 "uuid": "a189538e-c58e-41bf-bf7c-614177a4f87a", 00:11:43.263 "is_configured": true, 00:11:43.263 "data_offset": 0, 00:11:43.263 "data_size": 65536 00:11:43.263 }, 00:11:43.263 { 00:11:43.263 "name": "BaseBdev3", 00:11:43.263 "uuid": "6157063e-d211-4dca-8c98-23754306d728", 00:11:43.263 "is_configured": true, 00:11:43.263 "data_offset": 0, 00:11:43.263 "data_size": 65536 00:11:43.263 } 00:11:43.263 ] 00:11:43.263 }' 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.263 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.522 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.522 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.522 07:08:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.522 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.522 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.522 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.522 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.522 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.782 [2024-11-20 07:08:25.796221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.782 "name": "Existed_Raid", 00:11:43.782 "aliases": [ 00:11:43.782 "584e35fa-b30d-473f-8984-18d1d7a67234" 00:11:43.782 ], 00:11:43.782 "product_name": "Raid Volume", 00:11:43.782 "block_size": 512, 00:11:43.782 "num_blocks": 196608, 00:11:43.782 "uuid": "584e35fa-b30d-473f-8984-18d1d7a67234", 00:11:43.782 "assigned_rate_limits": { 00:11:43.782 "rw_ios_per_sec": 0, 00:11:43.782 "rw_mbytes_per_sec": 0, 00:11:43.782 "r_mbytes_per_sec": 0, 00:11:43.782 "w_mbytes_per_sec": 0 00:11:43.782 }, 00:11:43.782 "claimed": false, 00:11:43.782 "zoned": false, 00:11:43.782 "supported_io_types": { 00:11:43.782 "read": true, 00:11:43.782 "write": true, 00:11:43.782 "unmap": true, 00:11:43.782 "flush": true, 00:11:43.782 "reset": true, 00:11:43.782 "nvme_admin": false, 00:11:43.782 "nvme_io": false, 00:11:43.782 
"nvme_io_md": false, 00:11:43.782 "write_zeroes": true, 00:11:43.782 "zcopy": false, 00:11:43.782 "get_zone_info": false, 00:11:43.782 "zone_management": false, 00:11:43.782 "zone_append": false, 00:11:43.782 "compare": false, 00:11:43.782 "compare_and_write": false, 00:11:43.782 "abort": false, 00:11:43.782 "seek_hole": false, 00:11:43.782 "seek_data": false, 00:11:43.782 "copy": false, 00:11:43.782 "nvme_iov_md": false 00:11:43.782 }, 00:11:43.782 "memory_domains": [ 00:11:43.782 { 00:11:43.782 "dma_device_id": "system", 00:11:43.782 "dma_device_type": 1 00:11:43.782 }, 00:11:43.782 { 00:11:43.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.782 "dma_device_type": 2 00:11:43.782 }, 00:11:43.782 { 00:11:43.782 "dma_device_id": "system", 00:11:43.782 "dma_device_type": 1 00:11:43.782 }, 00:11:43.782 { 00:11:43.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.782 "dma_device_type": 2 00:11:43.782 }, 00:11:43.782 { 00:11:43.782 "dma_device_id": "system", 00:11:43.782 "dma_device_type": 1 00:11:43.782 }, 00:11:43.782 { 00:11:43.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.782 "dma_device_type": 2 00:11:43.782 } 00:11:43.782 ], 00:11:43.782 "driver_specific": { 00:11:43.782 "raid": { 00:11:43.782 "uuid": "584e35fa-b30d-473f-8984-18d1d7a67234", 00:11:43.782 "strip_size_kb": 64, 00:11:43.782 "state": "online", 00:11:43.782 "raid_level": "raid0", 00:11:43.782 "superblock": false, 00:11:43.782 "num_base_bdevs": 3, 00:11:43.782 "num_base_bdevs_discovered": 3, 00:11:43.782 "num_base_bdevs_operational": 3, 00:11:43.782 "base_bdevs_list": [ 00:11:43.782 { 00:11:43.782 "name": "BaseBdev1", 00:11:43.782 "uuid": "8ee7e782-656c-4244-abca-31d5f75034fe", 00:11:43.782 "is_configured": true, 00:11:43.782 "data_offset": 0, 00:11:43.782 "data_size": 65536 00:11:43.782 }, 00:11:43.782 { 00:11:43.782 "name": "BaseBdev2", 00:11:43.782 "uuid": "a189538e-c58e-41bf-bf7c-614177a4f87a", 00:11:43.782 "is_configured": true, 00:11:43.782 "data_offset": 0, 00:11:43.782 
"data_size": 65536 00:11:43.782 }, 00:11:43.782 { 00:11:43.782 "name": "BaseBdev3", 00:11:43.782 "uuid": "6157063e-d211-4dca-8c98-23754306d728", 00:11:43.782 "is_configured": true, 00:11:43.782 "data_offset": 0, 00:11:43.782 "data_size": 65536 00:11:43.782 } 00:11:43.782 ] 00:11:43.782 } 00:11:43.782 } 00:11:43.782 }' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:43.782 BaseBdev2 00:11:43.782 BaseBdev3' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.782 07:08:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.782 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.782 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.042 
07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.042 [2024-11-20 07:08:26.079513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:44.042 [2024-11-20 07:08:26.079544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.042 [2024-11-20 07:08:26.079600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.042 "name": "Existed_Raid", 00:11:44.042 "uuid": "584e35fa-b30d-473f-8984-18d1d7a67234", 00:11:44.042 "strip_size_kb": 64, 00:11:44.042 "state": "offline", 00:11:44.042 "raid_level": "raid0", 00:11:44.042 "superblock": false, 00:11:44.042 "num_base_bdevs": 3, 00:11:44.042 "num_base_bdevs_discovered": 2, 00:11:44.042 "num_base_bdevs_operational": 2, 00:11:44.042 "base_bdevs_list": [ 00:11:44.042 { 00:11:44.042 "name": null, 00:11:44.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.042 "is_configured": false, 00:11:44.042 "data_offset": 0, 00:11:44.042 "data_size": 65536 00:11:44.042 }, 00:11:44.042 { 00:11:44.042 "name": "BaseBdev2", 00:11:44.042 "uuid": "a189538e-c58e-41bf-bf7c-614177a4f87a", 00:11:44.042 "is_configured": true, 00:11:44.042 "data_offset": 0, 00:11:44.042 "data_size": 65536 00:11:44.042 }, 00:11:44.042 { 00:11:44.042 "name": "BaseBdev3", 00:11:44.042 "uuid": "6157063e-d211-4dca-8c98-23754306d728", 00:11:44.042 "is_configured": true, 00:11:44.042 "data_offset": 0, 00:11:44.042 "data_size": 65536 00:11:44.042 } 00:11:44.042 ] 00:11:44.042 }' 
00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.042 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.610 [2024-11-20 07:08:26.644211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.610 07:08:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.610 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.610 [2024-11-20 07:08:26.802710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:44.610 [2024-11-20 07:08:26.802846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.870 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.870 BaseBdev2 00:11:44.870 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.870 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:44.870 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.871 [ 00:11:44.871 { 00:11:44.871 "name": "BaseBdev2", 00:11:44.871 "aliases": [ 00:11:44.871 "2fe835f9-e42a-4d2d-aad5-fc74d3a00060" 00:11:44.871 ], 00:11:44.871 "product_name": "Malloc disk", 00:11:44.871 "block_size": 512, 00:11:44.871 "num_blocks": 65536, 00:11:44.871 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:44.871 "assigned_rate_limits": { 00:11:44.871 "rw_ios_per_sec": 0, 00:11:44.871 "rw_mbytes_per_sec": 0, 00:11:44.871 "r_mbytes_per_sec": 0, 00:11:44.871 "w_mbytes_per_sec": 0 00:11:44.871 }, 00:11:44.871 "claimed": false, 00:11:44.871 "zoned": false, 00:11:44.871 "supported_io_types": { 00:11:44.871 "read": true, 00:11:44.871 "write": true, 00:11:44.871 "unmap": true, 00:11:44.871 "flush": true, 00:11:44.871 "reset": true, 00:11:44.871 "nvme_admin": false, 00:11:44.871 "nvme_io": false, 00:11:44.871 "nvme_io_md": false, 00:11:44.871 "write_zeroes": true, 00:11:44.871 "zcopy": true, 00:11:44.871 "get_zone_info": false, 00:11:44.871 "zone_management": false, 00:11:44.871 "zone_append": false, 00:11:44.871 "compare": false, 00:11:44.871 "compare_and_write": false, 00:11:44.871 "abort": true, 00:11:44.871 "seek_hole": false, 00:11:44.871 "seek_data": false, 00:11:44.871 "copy": true, 00:11:44.871 "nvme_iov_md": false 
00:11:44.871 }, 00:11:44.871 "memory_domains": [ 00:11:44.871 { 00:11:44.871 "dma_device_id": "system", 00:11:44.871 "dma_device_type": 1 00:11:44.871 }, 00:11:44.871 { 00:11:44.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.871 "dma_device_type": 2 00:11:44.871 } 00:11:44.871 ], 00:11:44.871 "driver_specific": {} 00:11:44.871 } 00:11:44.871 ] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.871 BaseBdev3 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.871 [ 00:11:44.871 { 00:11:44.871 "name": "BaseBdev3", 00:11:44.871 "aliases": [ 00:11:44.871 "5c63e225-e800-4bb1-970d-ce1ded75e665" 00:11:44.871 ], 00:11:44.871 "product_name": "Malloc disk", 00:11:44.871 "block_size": 512, 00:11:44.871 "num_blocks": 65536, 00:11:44.871 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:44.871 "assigned_rate_limits": { 00:11:44.871 "rw_ios_per_sec": 0, 00:11:44.871 "rw_mbytes_per_sec": 0, 00:11:44.871 "r_mbytes_per_sec": 0, 00:11:44.871 "w_mbytes_per_sec": 0 00:11:44.871 }, 00:11:44.871 "claimed": false, 00:11:44.871 "zoned": false, 00:11:44.871 "supported_io_types": { 00:11:44.871 "read": true, 00:11:44.871 "write": true, 00:11:44.871 "unmap": true, 00:11:44.871 "flush": true, 00:11:44.871 "reset": true, 00:11:44.871 "nvme_admin": false, 00:11:44.871 "nvme_io": false, 00:11:44.871 "nvme_io_md": false, 00:11:44.871 "write_zeroes": true, 00:11:44.871 "zcopy": true, 00:11:44.871 "get_zone_info": false, 00:11:44.871 "zone_management": false, 00:11:44.871 "zone_append": false, 00:11:44.871 "compare": false, 00:11:44.871 "compare_and_write": false, 00:11:44.871 "abort": true, 00:11:44.871 "seek_hole": false, 00:11:44.871 "seek_data": false, 00:11:44.871 "copy": true, 00:11:44.871 "nvme_iov_md": false 
00:11:44.871 }, 00:11:44.871 "memory_domains": [ 00:11:44.871 { 00:11:44.871 "dma_device_id": "system", 00:11:44.871 "dma_device_type": 1 00:11:44.871 }, 00:11:44.871 { 00:11:44.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.871 "dma_device_type": 2 00:11:44.871 } 00:11:44.871 ], 00:11:44.871 "driver_specific": {} 00:11:44.871 } 00:11:44.871 ] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.871 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.871 [2024-11-20 07:08:27.129794] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.871 [2024-11-20 07:08:27.129913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.871 [2024-11-20 07:08:27.129954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.871 [2024-11-20 07:08:27.132141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.131 "name": "Existed_Raid", 00:11:45.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.131 "strip_size_kb": 64, 00:11:45.131 "state": "configuring", 00:11:45.131 "raid_level": "raid0", 00:11:45.131 "superblock": false, 00:11:45.131 "num_base_bdevs": 3, 00:11:45.131 "num_base_bdevs_discovered": 2, 00:11:45.131 "num_base_bdevs_operational": 3, 
00:11:45.131 "base_bdevs_list": [ 00:11:45.131 { 00:11:45.131 "name": "BaseBdev1", 00:11:45.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.131 "is_configured": false, 00:11:45.131 "data_offset": 0, 00:11:45.131 "data_size": 0 00:11:45.131 }, 00:11:45.131 { 00:11:45.131 "name": "BaseBdev2", 00:11:45.131 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:45.131 "is_configured": true, 00:11:45.131 "data_offset": 0, 00:11:45.131 "data_size": 65536 00:11:45.131 }, 00:11:45.131 { 00:11:45.131 "name": "BaseBdev3", 00:11:45.131 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:45.131 "is_configured": true, 00:11:45.131 "data_offset": 0, 00:11:45.131 "data_size": 65536 00:11:45.131 } 00:11:45.131 ] 00:11:45.131 }' 00:11:45.131 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.132 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.392 [2024-11-20 07:08:27.569103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.392 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.392 "name": "Existed_Raid", 00:11:45.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.392 "strip_size_kb": 64, 00:11:45.392 "state": "configuring", 00:11:45.393 "raid_level": "raid0", 00:11:45.393 "superblock": false, 00:11:45.393 "num_base_bdevs": 3, 00:11:45.393 "num_base_bdevs_discovered": 1, 00:11:45.393 "num_base_bdevs_operational": 3, 00:11:45.393 "base_bdevs_list": [ 00:11:45.393 { 00:11:45.393 "name": "BaseBdev1", 00:11:45.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.393 "is_configured": false, 00:11:45.393 "data_offset": 0, 00:11:45.393 "data_size": 0 00:11:45.393 }, 00:11:45.393 { 00:11:45.393 "name": null, 
00:11:45.393 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:45.393 "is_configured": false, 00:11:45.393 "data_offset": 0, 00:11:45.393 "data_size": 65536 00:11:45.393 }, 00:11:45.393 { 00:11:45.393 "name": "BaseBdev3", 00:11:45.393 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:45.393 "is_configured": true, 00:11:45.393 "data_offset": 0, 00:11:45.393 "data_size": 65536 00:11:45.393 } 00:11:45.393 ] 00:11:45.393 }' 00:11:45.393 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.393 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.979 [2024-11-20 07:08:28.098811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.979 BaseBdev1 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 
-- # waitforbdev BaseBdev1 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.979 [ 00:11:45.979 { 00:11:45.979 "name": "BaseBdev1", 00:11:45.979 "aliases": [ 00:11:45.979 "90b7d13f-7fee-4c0a-96cf-b95e060edde2" 00:11:45.979 ], 00:11:45.979 "product_name": "Malloc disk", 00:11:45.979 "block_size": 512, 00:11:45.979 "num_blocks": 65536, 00:11:45.979 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:45.979 "assigned_rate_limits": { 00:11:45.979 "rw_ios_per_sec": 0, 00:11:45.979 "rw_mbytes_per_sec": 0, 00:11:45.979 "r_mbytes_per_sec": 0, 00:11:45.979 "w_mbytes_per_sec": 0 00:11:45.979 }, 00:11:45.979 "claimed": true, 00:11:45.979 "claim_type": "exclusive_write", 00:11:45.979 
"zoned": false, 00:11:45.979 "supported_io_types": { 00:11:45.979 "read": true, 00:11:45.979 "write": true, 00:11:45.979 "unmap": true, 00:11:45.979 "flush": true, 00:11:45.979 "reset": true, 00:11:45.979 "nvme_admin": false, 00:11:45.979 "nvme_io": false, 00:11:45.979 "nvme_io_md": false, 00:11:45.979 "write_zeroes": true, 00:11:45.979 "zcopy": true, 00:11:45.979 "get_zone_info": false, 00:11:45.979 "zone_management": false, 00:11:45.979 "zone_append": false, 00:11:45.979 "compare": false, 00:11:45.979 "compare_and_write": false, 00:11:45.979 "abort": true, 00:11:45.979 "seek_hole": false, 00:11:45.979 "seek_data": false, 00:11:45.979 "copy": true, 00:11:45.979 "nvme_iov_md": false 00:11:45.979 }, 00:11:45.979 "memory_domains": [ 00:11:45.979 { 00:11:45.979 "dma_device_id": "system", 00:11:45.979 "dma_device_type": 1 00:11:45.979 }, 00:11:45.979 { 00:11:45.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.979 "dma_device_type": 2 00:11:45.979 } 00:11:45.979 ], 00:11:45.979 "driver_specific": {} 00:11:45.979 } 00:11:45.979 ] 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.979 
07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.979 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.980 "name": "Existed_Raid", 00:11:45.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.980 "strip_size_kb": 64, 00:11:45.980 "state": "configuring", 00:11:45.980 "raid_level": "raid0", 00:11:45.980 "superblock": false, 00:11:45.980 "num_base_bdevs": 3, 00:11:45.980 "num_base_bdevs_discovered": 2, 00:11:45.980 "num_base_bdevs_operational": 3, 00:11:45.980 "base_bdevs_list": [ 00:11:45.980 { 00:11:45.980 "name": "BaseBdev1", 00:11:45.980 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:45.980 "is_configured": true, 00:11:45.980 "data_offset": 0, 00:11:45.980 "data_size": 65536 00:11:45.980 }, 00:11:45.980 { 00:11:45.980 "name": null, 00:11:45.980 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:45.980 "is_configured": false, 00:11:45.980 "data_offset": 0, 00:11:45.980 "data_size": 65536 00:11:45.980 }, 00:11:45.980 { 00:11:45.980 "name": "BaseBdev3", 00:11:45.980 
"uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:45.980 "is_configured": true, 00:11:45.980 "data_offset": 0, 00:11:45.980 "data_size": 65536 00:11:45.980 } 00:11:45.980 ] 00:11:45.980 }' 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.980 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.543 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.543 [2024-11-20 07:08:28.653990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.544 "name": "Existed_Raid", 00:11:46.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.544 "strip_size_kb": 64, 00:11:46.544 "state": "configuring", 00:11:46.544 "raid_level": "raid0", 00:11:46.544 "superblock": false, 00:11:46.544 "num_base_bdevs": 3, 00:11:46.544 "num_base_bdevs_discovered": 1, 00:11:46.544 "num_base_bdevs_operational": 3, 00:11:46.544 "base_bdevs_list": [ 00:11:46.544 { 00:11:46.544 "name": "BaseBdev1", 00:11:46.544 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:46.544 
"is_configured": true, 00:11:46.544 "data_offset": 0, 00:11:46.544 "data_size": 65536 00:11:46.544 }, 00:11:46.544 { 00:11:46.544 "name": null, 00:11:46.544 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:46.544 "is_configured": false, 00:11:46.544 "data_offset": 0, 00:11:46.544 "data_size": 65536 00:11:46.544 }, 00:11:46.544 { 00:11:46.544 "name": null, 00:11:46.544 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:46.544 "is_configured": false, 00:11:46.544 "data_offset": 0, 00:11:46.544 "data_size": 65536 00:11:46.544 } 00:11:46.544 ] 00:11:46.544 }' 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.544 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 [2024-11-20 07:08:29.165227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.129 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.129 "name": "Existed_Raid", 00:11:47.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.129 
"strip_size_kb": 64, 00:11:47.129 "state": "configuring", 00:11:47.129 "raid_level": "raid0", 00:11:47.129 "superblock": false, 00:11:47.129 "num_base_bdevs": 3, 00:11:47.129 "num_base_bdevs_discovered": 2, 00:11:47.129 "num_base_bdevs_operational": 3, 00:11:47.129 "base_bdevs_list": [ 00:11:47.129 { 00:11:47.129 "name": "BaseBdev1", 00:11:47.129 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:47.129 "is_configured": true, 00:11:47.129 "data_offset": 0, 00:11:47.129 "data_size": 65536 00:11:47.129 }, 00:11:47.129 { 00:11:47.129 "name": null, 00:11:47.129 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:47.129 "is_configured": false, 00:11:47.129 "data_offset": 0, 00:11:47.129 "data_size": 65536 00:11:47.133 }, 00:11:47.133 { 00:11:47.133 "name": "BaseBdev3", 00:11:47.133 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:47.133 "is_configured": true, 00:11:47.133 "data_offset": 0, 00:11:47.133 "data_size": 65536 00:11:47.133 } 00:11:47.133 ] 00:11:47.133 }' 00:11:47.133 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.133 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.698 [2024-11-20 07:08:29.716263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.698 07:08:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.698 "name": "Existed_Raid", 00:11:47.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.698 "strip_size_kb": 64, 00:11:47.698 "state": "configuring", 00:11:47.698 "raid_level": "raid0", 00:11:47.698 "superblock": false, 00:11:47.698 "num_base_bdevs": 3, 00:11:47.698 "num_base_bdevs_discovered": 1, 00:11:47.698 "num_base_bdevs_operational": 3, 00:11:47.698 "base_bdevs_list": [ 00:11:47.698 { 00:11:47.698 "name": null, 00:11:47.698 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:47.698 "is_configured": false, 00:11:47.698 "data_offset": 0, 00:11:47.698 "data_size": 65536 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "name": null, 00:11:47.698 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:47.698 "is_configured": false, 00:11:47.698 "data_offset": 0, 00:11:47.698 "data_size": 65536 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "name": "BaseBdev3", 00:11:47.698 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:47.698 "is_configured": true, 00:11:47.698 "data_offset": 0, 00:11:47.698 "data_size": 65536 00:11:47.698 } 00:11:47.698 ] 00:11:47.698 }' 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.698 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.263 07:08:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.263 [2024-11-20 07:08:30.333162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.263 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.264 07:08:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.264 "name": "Existed_Raid", 00:11:48.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.264 "strip_size_kb": 64, 00:11:48.264 "state": "configuring", 00:11:48.264 "raid_level": "raid0", 00:11:48.264 "superblock": false, 00:11:48.264 "num_base_bdevs": 3, 00:11:48.264 "num_base_bdevs_discovered": 2, 00:11:48.264 "num_base_bdevs_operational": 3, 00:11:48.264 "base_bdevs_list": [ 00:11:48.264 { 00:11:48.264 "name": null, 00:11:48.264 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:48.264 "is_configured": false, 00:11:48.264 "data_offset": 0, 00:11:48.264 "data_size": 65536 00:11:48.264 }, 00:11:48.264 { 00:11:48.264 "name": "BaseBdev2", 00:11:48.264 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:48.264 "is_configured": true, 00:11:48.264 "data_offset": 0, 00:11:48.264 "data_size": 65536 00:11:48.264 }, 00:11:48.264 { 00:11:48.264 "name": "BaseBdev3", 00:11:48.264 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:48.264 "is_configured": true, 00:11:48.264 "data_offset": 0, 00:11:48.264 "data_size": 65536 00:11:48.264 } 00:11:48.264 ] 00:11:48.264 }' 00:11:48.264 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.264 07:08:30 
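The `verify_raid_bdev_state` check traced above fetches the RAID bdev JSON via `rpc_cmd bdev_raid_get_bdevs all`, selects the entry named `Existed_Raid` with jq, and compares its fields against the expected values. A minimal Python sketch of that comparison (not the actual `bdev_raid.sh` code; the JSON below is condensed from the output captured in this log) looks like:

```python
import json

# Condensed from the `bdev_raid_get_bdevs all` output captured in the log above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", "is_configured": false},
    {"name": "BaseBdev2", "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", "is_configured": true},
    {"name": "BaseBdev3", "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirror of the shell-side checks: each reported field must match the
    # expected value passed by the test (state, level, strip size, bdev counts).
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must equal the number of configured base bdevs.
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]

# After BaseBdev2 is re-added, two of three base bdevs are configured and
# the array is still "configuring" (NewBaseBdev is not yet attached).
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3)
```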
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90b7d13f-7fee-4c0a-96cf-b95e060edde2 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 [2024-11-20 07:08:30.928951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:48.831 [2024-11-20 07:08:30.929068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:48.831 [2024-11-20 
07:08:30.929112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:48.831 [2024-11-20 07:08:30.929456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:48.831 [2024-11-20 07:08:30.929671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:48.831 [2024-11-20 07:08:30.929720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:48.831 [2024-11-20 07:08:30.930030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.831 NewBaseBdev 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 [ 00:11:48.831 { 00:11:48.831 "name": "NewBaseBdev", 00:11:48.831 "aliases": [ 00:11:48.831 "90b7d13f-7fee-4c0a-96cf-b95e060edde2" 00:11:48.831 ], 00:11:48.831 "product_name": "Malloc disk", 00:11:48.831 "block_size": 512, 00:11:48.831 "num_blocks": 65536, 00:11:48.831 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:48.831 "assigned_rate_limits": { 00:11:48.831 "rw_ios_per_sec": 0, 00:11:48.831 "rw_mbytes_per_sec": 0, 00:11:48.831 "r_mbytes_per_sec": 0, 00:11:48.831 "w_mbytes_per_sec": 0 00:11:48.831 }, 00:11:48.831 "claimed": true, 00:11:48.831 "claim_type": "exclusive_write", 00:11:48.831 "zoned": false, 00:11:48.831 "supported_io_types": { 00:11:48.831 "read": true, 00:11:48.831 "write": true, 00:11:48.831 "unmap": true, 00:11:48.831 "flush": true, 00:11:48.831 "reset": true, 00:11:48.831 "nvme_admin": false, 00:11:48.831 "nvme_io": false, 00:11:48.831 "nvme_io_md": false, 00:11:48.831 "write_zeroes": true, 00:11:48.831 "zcopy": true, 00:11:48.831 "get_zone_info": false, 00:11:48.831 "zone_management": false, 00:11:48.831 "zone_append": false, 00:11:48.831 "compare": false, 00:11:48.831 "compare_and_write": false, 00:11:48.831 "abort": true, 00:11:48.831 "seek_hole": false, 00:11:48.831 "seek_data": false, 00:11:48.831 "copy": true, 00:11:48.831 "nvme_iov_md": false 00:11:48.831 }, 00:11:48.831 "memory_domains": [ 00:11:48.831 { 00:11:48.831 "dma_device_id": "system", 00:11:48.831 "dma_device_type": 1 00:11:48.831 }, 00:11:48.831 { 00:11:48.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.831 "dma_device_type": 2 00:11:48.831 } 00:11:48.831 ], 00:11:48.831 "driver_specific": {} 00:11:48.831 } 00:11:48.831 ] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 07:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.831 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.831 "name": "Existed_Raid", 00:11:48.831 "uuid": 
"79b02fc3-6d3a-45bd-a48c-97115cea9208", 00:11:48.831 "strip_size_kb": 64, 00:11:48.831 "state": "online", 00:11:48.831 "raid_level": "raid0", 00:11:48.831 "superblock": false, 00:11:48.831 "num_base_bdevs": 3, 00:11:48.831 "num_base_bdevs_discovered": 3, 00:11:48.831 "num_base_bdevs_operational": 3, 00:11:48.831 "base_bdevs_list": [ 00:11:48.831 { 00:11:48.831 "name": "NewBaseBdev", 00:11:48.831 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:48.831 "is_configured": true, 00:11:48.831 "data_offset": 0, 00:11:48.831 "data_size": 65536 00:11:48.831 }, 00:11:48.831 { 00:11:48.831 "name": "BaseBdev2", 00:11:48.831 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:48.831 "is_configured": true, 00:11:48.831 "data_offset": 0, 00:11:48.831 "data_size": 65536 00:11:48.831 }, 00:11:48.831 { 00:11:48.831 "name": "BaseBdev3", 00:11:48.831 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:48.831 "is_configured": true, 00:11:48.831 "data_offset": 0, 00:11:48.831 "data_size": 65536 00:11:48.831 } 00:11:48.831 ] 00:11:48.831 }' 00:11:48.831 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.831 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.399 [2024-11-20 07:08:31.460447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.399 "name": "Existed_Raid", 00:11:49.399 "aliases": [ 00:11:49.399 "79b02fc3-6d3a-45bd-a48c-97115cea9208" 00:11:49.399 ], 00:11:49.399 "product_name": "Raid Volume", 00:11:49.399 "block_size": 512, 00:11:49.399 "num_blocks": 196608, 00:11:49.399 "uuid": "79b02fc3-6d3a-45bd-a48c-97115cea9208", 00:11:49.399 "assigned_rate_limits": { 00:11:49.399 "rw_ios_per_sec": 0, 00:11:49.399 "rw_mbytes_per_sec": 0, 00:11:49.399 "r_mbytes_per_sec": 0, 00:11:49.399 "w_mbytes_per_sec": 0 00:11:49.399 }, 00:11:49.399 "claimed": false, 00:11:49.399 "zoned": false, 00:11:49.399 "supported_io_types": { 00:11:49.399 "read": true, 00:11:49.399 "write": true, 00:11:49.399 "unmap": true, 00:11:49.399 "flush": true, 00:11:49.399 "reset": true, 00:11:49.399 "nvme_admin": false, 00:11:49.399 "nvme_io": false, 00:11:49.399 "nvme_io_md": false, 00:11:49.399 "write_zeroes": true, 00:11:49.399 "zcopy": false, 00:11:49.399 "get_zone_info": false, 00:11:49.399 "zone_management": false, 00:11:49.399 "zone_append": false, 00:11:49.399 "compare": false, 00:11:49.399 "compare_and_write": false, 00:11:49.399 "abort": false, 00:11:49.399 "seek_hole": false, 00:11:49.399 "seek_data": false, 00:11:49.399 "copy": false, 00:11:49.399 "nvme_iov_md": false 00:11:49.399 }, 00:11:49.399 "memory_domains": [ 
00:11:49.399 { 00:11:49.399 "dma_device_id": "system", 00:11:49.399 "dma_device_type": 1 00:11:49.399 }, 00:11:49.399 { 00:11:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.399 "dma_device_type": 2 00:11:49.399 }, 00:11:49.399 { 00:11:49.399 "dma_device_id": "system", 00:11:49.399 "dma_device_type": 1 00:11:49.399 }, 00:11:49.399 { 00:11:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.399 "dma_device_type": 2 00:11:49.399 }, 00:11:49.399 { 00:11:49.399 "dma_device_id": "system", 00:11:49.399 "dma_device_type": 1 00:11:49.399 }, 00:11:49.399 { 00:11:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.399 "dma_device_type": 2 00:11:49.399 } 00:11:49.399 ], 00:11:49.399 "driver_specific": { 00:11:49.399 "raid": { 00:11:49.399 "uuid": "79b02fc3-6d3a-45bd-a48c-97115cea9208", 00:11:49.399 "strip_size_kb": 64, 00:11:49.399 "state": "online", 00:11:49.399 "raid_level": "raid0", 00:11:49.399 "superblock": false, 00:11:49.399 "num_base_bdevs": 3, 00:11:49.399 "num_base_bdevs_discovered": 3, 00:11:49.399 "num_base_bdevs_operational": 3, 00:11:49.399 "base_bdevs_list": [ 00:11:49.399 { 00:11:49.399 "name": "NewBaseBdev", 00:11:49.399 "uuid": "90b7d13f-7fee-4c0a-96cf-b95e060edde2", 00:11:49.399 "is_configured": true, 00:11:49.399 "data_offset": 0, 00:11:49.399 "data_size": 65536 00:11:49.399 }, 00:11:49.399 { 00:11:49.399 "name": "BaseBdev2", 00:11:49.399 "uuid": "2fe835f9-e42a-4d2d-aad5-fc74d3a00060", 00:11:49.399 "is_configured": true, 00:11:49.399 "data_offset": 0, 00:11:49.399 "data_size": 65536 00:11:49.399 }, 00:11:49.399 { 00:11:49.399 "name": "BaseBdev3", 00:11:49.399 "uuid": "5c63e225-e800-4bb1-970d-ce1ded75e665", 00:11:49.399 "is_configured": true, 00:11:49.399 "data_offset": 0, 00:11:49.399 "data_size": 65536 00:11:49.399 } 00:11:49.399 ] 00:11:49.399 } 00:11:49.399 } 00:11:49.399 }' 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured 
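The `verify_raid_bdev_properties` step traced next builds a fingerprint of the RAID volume with the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` and requires every configured base bdev to produce the same fingerprint. A hedged Python equivalent (field names and sample values taken from the bdev JSON shown in this log; the dict layout here is a simplification, not SPDK's API):

```python
# Simplified stand-ins for `bdev_get_bdevs` output seen in the log:
# the online raid0 volume and its three configured Malloc base bdevs.
raid_bdev = {
    "name": "Existed_Raid", "block_size": 512,
    "driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
    ]}},
}
base_bdevs = {
    "NewBaseBdev": {"block_size": 512},
    "BaseBdev2": {"block_size": 512},
    "BaseBdev3": {"block_size": 512},
}

def fingerprint(bdev):
    # Same fields the jq filter joins; absent/null fields become empty strings,
    # which is why the log shows cmp_raid_bdev='512 ' with trailing blanks.
    return " ".join(str(bdev[k]) if k in bdev else ""
                    for k in ("block_size", "md_size", "md_interleave", "dif_type"))

cmp_raid_bdev = fingerprint(raid_bdev)
for entry in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]:
    if entry["is_configured"]:
        # Every configured base bdev must match the volume's metadata layout.
        assert fingerprint(base_bdevs[entry["name"]]) == cmp_raid_bdev
```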
== true).name' 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:49.399 BaseBdev2 00:11:49.399 BaseBdev3' 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.399 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.400 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.659 [2024-11-20 07:08:31.763604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.659 [2024-11-20 07:08:31.763671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.659 [2024-11-20 07:08:31.763763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.659 [2024-11-20 
07:08:31.763849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.659 [2024-11-20 07:08:31.763860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64115 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64115 ']' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64115 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64115 00:11:49.659 killing process with pid 64115 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64115' 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64115 00:11:49.659 [2024-11-20 07:08:31.806832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.659 07:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64115 00:11:49.918 [2024-11-20 07:08:32.138215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:11:51.375 00:11:51.375 real 0m11.109s 00:11:51.375 user 0m17.723s 00:11:51.375 sys 0m1.881s 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.375 ************************************ 00:11:51.375 END TEST raid_state_function_test 00:11:51.375 ************************************ 00:11:51.375 07:08:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:51.375 07:08:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:51.375 07:08:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.375 07:08:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.375 ************************************ 00:11:51.375 START TEST raid_state_function_test_sb 00:11:51.375 ************************************ 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64746 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64746' 00:11:51.375 Process raid pid: 64746 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64746 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64746 ']' 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.375 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.375 [2024-11-20 07:08:33.487990] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:51.375 [2024-11-20 07:08:33.488121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.634 [2024-11-20 07:08:33.666074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.634 [2024-11-20 07:08:33.784274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.892 [2024-11-20 07:08:34.000808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.892 [2024-11-20 07:08:34.000855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.150 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.150 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:52.150 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:52.150 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.150 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.150 [2024-11-20 07:08:34.356050] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.150 [2024-11-20 07:08:34.356109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.150 [2024-11-20 07:08:34.356122] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.150 [2024-11-20 07:08:34.356133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.150 [2024-11-20 07:08:34.356141] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:11:52.151 [2024-11-20 07:08:34.356151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.151 "name": "Existed_Raid", 00:11:52.151 "uuid": "91daa8be-7673-448f-8a79-8d2e6b25b739", 00:11:52.151 "strip_size_kb": 64, 00:11:52.151 "state": "configuring", 00:11:52.151 "raid_level": "raid0", 00:11:52.151 "superblock": true, 00:11:52.151 "num_base_bdevs": 3, 00:11:52.151 "num_base_bdevs_discovered": 0, 00:11:52.151 "num_base_bdevs_operational": 3, 00:11:52.151 "base_bdevs_list": [ 00:11:52.151 { 00:11:52.151 "name": "BaseBdev1", 00:11:52.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.151 "is_configured": false, 00:11:52.151 "data_offset": 0, 00:11:52.151 "data_size": 0 00:11:52.151 }, 00:11:52.151 { 00:11:52.151 "name": "BaseBdev2", 00:11:52.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.151 "is_configured": false, 00:11:52.151 "data_offset": 0, 00:11:52.151 "data_size": 0 00:11:52.151 }, 00:11:52.151 { 00:11:52.151 "name": "BaseBdev3", 00:11:52.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.151 "is_configured": false, 00:11:52.151 "data_offset": 0, 00:11:52.151 "data_size": 0 00:11:52.151 } 00:11:52.151 ] 00:11:52.151 }' 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.151 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.718 [2024-11-20 07:08:34.751303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.718 [2024-11-20 07:08:34.751363] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.718 [2024-11-20 07:08:34.763303] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.718 [2024-11-20 07:08:34.763427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.718 [2024-11-20 07:08:34.763467] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.718 [2024-11-20 07:08:34.763492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.718 [2024-11-20 07:08:34.763523] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.718 [2024-11-20 07:08:34.763546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.718 [2024-11-20 07:08:34.815411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.718 BaseBdev1 
00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.718 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.719 [ 00:11:52.719 { 00:11:52.719 "name": "BaseBdev1", 00:11:52.719 "aliases": [ 00:11:52.719 "68d7fed4-10c1-49c0-a4a8-895bff60b429" 00:11:52.719 ], 00:11:52.719 "product_name": "Malloc disk", 00:11:52.719 "block_size": 512, 00:11:52.719 "num_blocks": 65536, 00:11:52.719 "uuid": "68d7fed4-10c1-49c0-a4a8-895bff60b429", 00:11:52.719 "assigned_rate_limits": { 00:11:52.719 
"rw_ios_per_sec": 0, 00:11:52.719 "rw_mbytes_per_sec": 0, 00:11:52.719 "r_mbytes_per_sec": 0, 00:11:52.719 "w_mbytes_per_sec": 0 00:11:52.719 }, 00:11:52.719 "claimed": true, 00:11:52.719 "claim_type": "exclusive_write", 00:11:52.719 "zoned": false, 00:11:52.719 "supported_io_types": { 00:11:52.719 "read": true, 00:11:52.719 "write": true, 00:11:52.719 "unmap": true, 00:11:52.719 "flush": true, 00:11:52.719 "reset": true, 00:11:52.719 "nvme_admin": false, 00:11:52.719 "nvme_io": false, 00:11:52.719 "nvme_io_md": false, 00:11:52.719 "write_zeroes": true, 00:11:52.719 "zcopy": true, 00:11:52.719 "get_zone_info": false, 00:11:52.719 "zone_management": false, 00:11:52.719 "zone_append": false, 00:11:52.719 "compare": false, 00:11:52.719 "compare_and_write": false, 00:11:52.719 "abort": true, 00:11:52.719 "seek_hole": false, 00:11:52.719 "seek_data": false, 00:11:52.719 "copy": true, 00:11:52.719 "nvme_iov_md": false 00:11:52.719 }, 00:11:52.719 "memory_domains": [ 00:11:52.719 { 00:11:52.719 "dma_device_id": "system", 00:11:52.719 "dma_device_type": 1 00:11:52.719 }, 00:11:52.719 { 00:11:52.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.719 "dma_device_type": 2 00:11:52.719 } 00:11:52.719 ], 00:11:52.719 "driver_specific": {} 00:11:52.719 } 00:11:52.719 ] 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.719 "name": "Existed_Raid", 00:11:52.719 "uuid": "76e4ecf5-d923-49dc-8cb7-03aa3c74b63d", 00:11:52.719 "strip_size_kb": 64, 00:11:52.719 "state": "configuring", 00:11:52.719 "raid_level": "raid0", 00:11:52.719 "superblock": true, 00:11:52.719 "num_base_bdevs": 3, 00:11:52.719 "num_base_bdevs_discovered": 1, 00:11:52.719 "num_base_bdevs_operational": 3, 00:11:52.719 "base_bdevs_list": [ 00:11:52.719 { 00:11:52.719 "name": "BaseBdev1", 00:11:52.719 "uuid": "68d7fed4-10c1-49c0-a4a8-895bff60b429", 00:11:52.719 "is_configured": true, 00:11:52.719 "data_offset": 2048, 00:11:52.719 "data_size": 63488 
00:11:52.719 }, 00:11:52.719 { 00:11:52.719 "name": "BaseBdev2", 00:11:52.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.719 "is_configured": false, 00:11:52.719 "data_offset": 0, 00:11:52.719 "data_size": 0 00:11:52.719 }, 00:11:52.719 { 00:11:52.719 "name": "BaseBdev3", 00:11:52.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.719 "is_configured": false, 00:11:52.719 "data_offset": 0, 00:11:52.719 "data_size": 0 00:11:52.719 } 00:11:52.719 ] 00:11:52.719 }' 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.719 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.289 [2024-11-20 07:08:35.274681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.289 [2024-11-20 07:08:35.274746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.289 [2024-11-20 07:08:35.286743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.289 [2024-11-20 
07:08:35.288720] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.289 [2024-11-20 07:08:35.288769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.289 [2024-11-20 07:08:35.288779] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:53.289 [2024-11-20 07:08:35.288804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.289 "name": "Existed_Raid", 00:11:53.289 "uuid": "8a1a6c9a-e824-4061-b072-e86db359e2bb", 00:11:53.289 "strip_size_kb": 64, 00:11:53.289 "state": "configuring", 00:11:53.289 "raid_level": "raid0", 00:11:53.289 "superblock": true, 00:11:53.289 "num_base_bdevs": 3, 00:11:53.289 "num_base_bdevs_discovered": 1, 00:11:53.289 "num_base_bdevs_operational": 3, 00:11:53.289 "base_bdevs_list": [ 00:11:53.289 { 00:11:53.289 "name": "BaseBdev1", 00:11:53.289 "uuid": "68d7fed4-10c1-49c0-a4a8-895bff60b429", 00:11:53.289 "is_configured": true, 00:11:53.289 "data_offset": 2048, 00:11:53.289 "data_size": 63488 00:11:53.289 }, 00:11:53.289 { 00:11:53.289 "name": "BaseBdev2", 00:11:53.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.289 "is_configured": false, 00:11:53.289 "data_offset": 0, 00:11:53.289 "data_size": 0 00:11:53.289 }, 00:11:53.289 { 00:11:53.289 "name": "BaseBdev3", 00:11:53.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.289 "is_configured": false, 00:11:53.289 "data_offset": 0, 00:11:53.289 "data_size": 0 00:11:53.289 } 00:11:53.289 ] 00:11:53.289 }' 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.289 07:08:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.549 [2024-11-20 07:08:35.797875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.549 BaseBdev2 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.549 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.808 [ 00:11:53.808 { 00:11:53.808 "name": "BaseBdev2", 00:11:53.808 "aliases": [ 00:11:53.808 "df0a7e99-f6e4-4bae-89ab-f1e96365edaf" 00:11:53.809 ], 00:11:53.809 "product_name": "Malloc disk", 00:11:53.809 "block_size": 512, 00:11:53.809 "num_blocks": 65536, 00:11:53.809 "uuid": "df0a7e99-f6e4-4bae-89ab-f1e96365edaf", 00:11:53.809 "assigned_rate_limits": { 00:11:53.809 "rw_ios_per_sec": 0, 00:11:53.809 "rw_mbytes_per_sec": 0, 00:11:53.809 "r_mbytes_per_sec": 0, 00:11:53.809 "w_mbytes_per_sec": 0 00:11:53.809 }, 00:11:53.809 "claimed": true, 00:11:53.809 "claim_type": "exclusive_write", 00:11:53.809 "zoned": false, 00:11:53.809 "supported_io_types": { 00:11:53.809 "read": true, 00:11:53.809 "write": true, 00:11:53.809 "unmap": true, 00:11:53.809 "flush": true, 00:11:53.809 "reset": true, 00:11:53.809 "nvme_admin": false, 00:11:53.809 "nvme_io": false, 00:11:53.809 "nvme_io_md": false, 00:11:53.809 "write_zeroes": true, 00:11:53.809 "zcopy": true, 00:11:53.809 "get_zone_info": false, 00:11:53.809 "zone_management": false, 00:11:53.809 "zone_append": false, 00:11:53.809 "compare": false, 00:11:53.809 "compare_and_write": false, 00:11:53.809 "abort": true, 00:11:53.809 "seek_hole": false, 00:11:53.809 "seek_data": false, 00:11:53.809 "copy": true, 00:11:53.809 "nvme_iov_md": false 00:11:53.809 }, 00:11:53.809 "memory_domains": [ 00:11:53.809 { 00:11:53.809 "dma_device_id": "system", 00:11:53.809 "dma_device_type": 1 00:11:53.809 }, 00:11:53.809 { 00:11:53.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.809 "dma_device_type": 2 00:11:53.809 } 00:11:53.809 ], 00:11:53.809 "driver_specific": {} 00:11:53.809 } 00:11:53.809 ] 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.809 "name": "Existed_Raid", 00:11:53.809 "uuid": "8a1a6c9a-e824-4061-b072-e86db359e2bb", 00:11:53.809 "strip_size_kb": 64, 00:11:53.809 "state": "configuring", 00:11:53.809 "raid_level": "raid0", 00:11:53.809 "superblock": true, 00:11:53.809 "num_base_bdevs": 3, 00:11:53.809 "num_base_bdevs_discovered": 2, 00:11:53.809 "num_base_bdevs_operational": 3, 00:11:53.809 "base_bdevs_list": [ 00:11:53.809 { 00:11:53.809 "name": "BaseBdev1", 00:11:53.809 "uuid": "68d7fed4-10c1-49c0-a4a8-895bff60b429", 00:11:53.809 "is_configured": true, 00:11:53.809 "data_offset": 2048, 00:11:53.809 "data_size": 63488 00:11:53.809 }, 00:11:53.809 { 00:11:53.809 "name": "BaseBdev2", 00:11:53.809 "uuid": "df0a7e99-f6e4-4bae-89ab-f1e96365edaf", 00:11:53.809 "is_configured": true, 00:11:53.809 "data_offset": 2048, 00:11:53.809 "data_size": 63488 00:11:53.809 }, 00:11:53.809 { 00:11:53.809 "name": "BaseBdev3", 00:11:53.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.809 "is_configured": false, 00:11:53.809 "data_offset": 0, 00:11:53.809 "data_size": 0 00:11:53.809 } 00:11:53.809 ] 00:11:53.809 }' 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.809 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.069 [2024-11-20 07:08:36.311259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.069 [2024-11-20 07:08:36.311708] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:54.069 [2024-11-20 07:08:36.311781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:54.069 BaseBdev3 00:11:54.069 [2024-11-20 07:08:36.312146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:54.069 [2024-11-20 07:08:36.312346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:54.069 [2024-11-20 07:08:36.312404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.069 [2024-11-20 07:08:36.312687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.069 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.069 [ 00:11:54.069 { 00:11:54.069 "name": "BaseBdev3", 00:11:54.069 "aliases": [ 00:11:54.069 "d17472e3-9ecd-43c7-b9dc-8420425f2316" 00:11:54.069 ], 00:11:54.069 "product_name": "Malloc disk", 00:11:54.069 "block_size": 512, 00:11:54.069 "num_blocks": 65536, 00:11:54.328 "uuid": "d17472e3-9ecd-43c7-b9dc-8420425f2316", 00:11:54.328 "assigned_rate_limits": { 00:11:54.328 "rw_ios_per_sec": 0, 00:11:54.328 "rw_mbytes_per_sec": 0, 00:11:54.328 "r_mbytes_per_sec": 0, 00:11:54.328 "w_mbytes_per_sec": 0 00:11:54.328 }, 00:11:54.328 "claimed": true, 00:11:54.328 "claim_type": "exclusive_write", 00:11:54.328 "zoned": false, 00:11:54.328 "supported_io_types": { 00:11:54.328 "read": true, 00:11:54.328 "write": true, 00:11:54.328 "unmap": true, 00:11:54.328 "flush": true, 00:11:54.328 "reset": true, 00:11:54.328 "nvme_admin": false, 00:11:54.328 "nvme_io": false, 00:11:54.328 "nvme_io_md": false, 00:11:54.328 "write_zeroes": true, 00:11:54.328 "zcopy": true, 00:11:54.328 "get_zone_info": false, 00:11:54.328 "zone_management": false, 00:11:54.328 "zone_append": false, 00:11:54.328 "compare": false, 00:11:54.328 "compare_and_write": false, 00:11:54.328 "abort": true, 00:11:54.328 "seek_hole": false, 00:11:54.328 "seek_data": false, 00:11:54.328 "copy": true, 00:11:54.328 "nvme_iov_md": false 00:11:54.328 }, 00:11:54.328 "memory_domains": [ 00:11:54.328 { 00:11:54.328 "dma_device_id": "system", 00:11:54.328 "dma_device_type": 1 00:11:54.328 }, 00:11:54.328 { 00:11:54.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.328 "dma_device_type": 2 00:11:54.328 } 00:11:54.328 ], 00:11:54.328 "driver_specific": 
{} 00:11:54.328 } 00:11:54.328 ] 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.328 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.328 "name": "Existed_Raid", 00:11:54.328 "uuid": "8a1a6c9a-e824-4061-b072-e86db359e2bb", 00:11:54.328 "strip_size_kb": 64, 00:11:54.328 "state": "online", 00:11:54.328 "raid_level": "raid0", 00:11:54.328 "superblock": true, 00:11:54.328 "num_base_bdevs": 3, 00:11:54.328 "num_base_bdevs_discovered": 3, 00:11:54.328 "num_base_bdevs_operational": 3, 00:11:54.328 "base_bdevs_list": [ 00:11:54.328 { 00:11:54.328 "name": "BaseBdev1", 00:11:54.328 "uuid": "68d7fed4-10c1-49c0-a4a8-895bff60b429", 00:11:54.328 "is_configured": true, 00:11:54.328 "data_offset": 2048, 00:11:54.329 "data_size": 63488 00:11:54.329 }, 00:11:54.329 { 00:11:54.329 "name": "BaseBdev2", 00:11:54.329 "uuid": "df0a7e99-f6e4-4bae-89ab-f1e96365edaf", 00:11:54.329 "is_configured": true, 00:11:54.329 "data_offset": 2048, 00:11:54.329 "data_size": 63488 00:11:54.329 }, 00:11:54.329 { 00:11:54.329 "name": "BaseBdev3", 00:11:54.329 "uuid": "d17472e3-9ecd-43c7-b9dc-8420425f2316", 00:11:54.329 "is_configured": true, 00:11:54.329 "data_offset": 2048, 00:11:54.329 "data_size": 63488 00:11:54.329 } 00:11:54.329 ] 00:11:54.329 }' 00:11:54.329 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.329 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.588 [2024-11-20 07:08:36.734925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.588 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.588 "name": "Existed_Raid", 00:11:54.588 "aliases": [ 00:11:54.588 "8a1a6c9a-e824-4061-b072-e86db359e2bb" 00:11:54.588 ], 00:11:54.588 "product_name": "Raid Volume", 00:11:54.588 "block_size": 512, 00:11:54.588 "num_blocks": 190464, 00:11:54.588 "uuid": "8a1a6c9a-e824-4061-b072-e86db359e2bb", 00:11:54.588 "assigned_rate_limits": { 00:11:54.588 "rw_ios_per_sec": 0, 00:11:54.588 "rw_mbytes_per_sec": 0, 00:11:54.588 "r_mbytes_per_sec": 0, 00:11:54.588 "w_mbytes_per_sec": 0 00:11:54.588 }, 00:11:54.588 "claimed": false, 00:11:54.588 "zoned": false, 00:11:54.588 "supported_io_types": { 00:11:54.588 "read": true, 00:11:54.588 "write": true, 00:11:54.588 "unmap": true, 00:11:54.588 "flush": true, 00:11:54.588 "reset": true, 00:11:54.588 "nvme_admin": false, 00:11:54.588 "nvme_io": false, 00:11:54.588 "nvme_io_md": false, 00:11:54.588 
"write_zeroes": true, 00:11:54.588 "zcopy": false, 00:11:54.588 "get_zone_info": false, 00:11:54.588 "zone_management": false, 00:11:54.588 "zone_append": false, 00:11:54.588 "compare": false, 00:11:54.588 "compare_and_write": false, 00:11:54.588 "abort": false, 00:11:54.588 "seek_hole": false, 00:11:54.588 "seek_data": false, 00:11:54.588 "copy": false, 00:11:54.588 "nvme_iov_md": false 00:11:54.588 }, 00:11:54.588 "memory_domains": [ 00:11:54.588 { 00:11:54.588 "dma_device_id": "system", 00:11:54.588 "dma_device_type": 1 00:11:54.588 }, 00:11:54.588 { 00:11:54.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.588 "dma_device_type": 2 00:11:54.588 }, 00:11:54.588 { 00:11:54.588 "dma_device_id": "system", 00:11:54.588 "dma_device_type": 1 00:11:54.588 }, 00:11:54.588 { 00:11:54.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.588 "dma_device_type": 2 00:11:54.588 }, 00:11:54.588 { 00:11:54.588 "dma_device_id": "system", 00:11:54.588 "dma_device_type": 1 00:11:54.588 }, 00:11:54.588 { 00:11:54.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.588 "dma_device_type": 2 00:11:54.588 } 00:11:54.588 ], 00:11:54.588 "driver_specific": { 00:11:54.588 "raid": { 00:11:54.588 "uuid": "8a1a6c9a-e824-4061-b072-e86db359e2bb", 00:11:54.588 "strip_size_kb": 64, 00:11:54.588 "state": "online", 00:11:54.588 "raid_level": "raid0", 00:11:54.588 "superblock": true, 00:11:54.588 "num_base_bdevs": 3, 00:11:54.588 "num_base_bdevs_discovered": 3, 00:11:54.588 "num_base_bdevs_operational": 3, 00:11:54.588 "base_bdevs_list": [ 00:11:54.588 { 00:11:54.588 "name": "BaseBdev1", 00:11:54.588 "uuid": "68d7fed4-10c1-49c0-a4a8-895bff60b429", 00:11:54.589 "is_configured": true, 00:11:54.589 "data_offset": 2048, 00:11:54.589 "data_size": 63488 00:11:54.589 }, 00:11:54.589 { 00:11:54.589 "name": "BaseBdev2", 00:11:54.589 "uuid": "df0a7e99-f6e4-4bae-89ab-f1e96365edaf", 00:11:54.589 "is_configured": true, 00:11:54.589 "data_offset": 2048, 00:11:54.589 "data_size": 63488 00:11:54.589 }, 
00:11:54.589 { 00:11:54.589 "name": "BaseBdev3", 00:11:54.589 "uuid": "d17472e3-9ecd-43c7-b9dc-8420425f2316", 00:11:54.589 "is_configured": true, 00:11:54.589 "data_offset": 2048, 00:11:54.589 "data_size": 63488 00:11:54.589 } 00:11:54.589 ] 00:11:54.589 } 00:11:54.589 } 00:11:54.589 }' 00:11:54.589 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.589 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.589 BaseBdev2 00:11:54.589 BaseBdev3' 00:11:54.589 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.589 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.589 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.589 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.848 
07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.848 07:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.848 [2024-11-20 07:08:36.998217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.848 [2024-11-20 07:08:36.998247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.848 [2024-11-20 07:08:36.998306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.848 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.848 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.848 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:54.848 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.849 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.108 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.108 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.108 "name": "Existed_Raid", 00:11:55.108 "uuid": "8a1a6c9a-e824-4061-b072-e86db359e2bb", 00:11:55.108 "strip_size_kb": 64, 00:11:55.108 "state": "offline", 00:11:55.108 "raid_level": "raid0", 00:11:55.108 "superblock": true, 00:11:55.108 "num_base_bdevs": 3, 00:11:55.108 "num_base_bdevs_discovered": 2, 00:11:55.108 "num_base_bdevs_operational": 2, 00:11:55.108 "base_bdevs_list": [ 00:11:55.108 { 00:11:55.108 "name": null, 00:11:55.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.108 "is_configured": false, 00:11:55.109 "data_offset": 0, 00:11:55.109 "data_size": 63488 00:11:55.109 }, 00:11:55.109 { 00:11:55.109 "name": "BaseBdev2", 00:11:55.109 "uuid": "df0a7e99-f6e4-4bae-89ab-f1e96365edaf", 00:11:55.109 "is_configured": true, 00:11:55.109 "data_offset": 2048, 00:11:55.109 "data_size": 63488 00:11:55.109 }, 00:11:55.109 { 00:11:55.109 "name": "BaseBdev3", 00:11:55.109 "uuid": "d17472e3-9ecd-43c7-b9dc-8420425f2316", 
00:11:55.109 "is_configured": true, 00:11:55.109 "data_offset": 2048, 00:11:55.109 "data_size": 63488 00:11:55.109 } 00:11:55.109 ] 00:11:55.109 }' 00:11:55.109 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.109 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.368 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.368 [2024-11-20 07:08:37.583486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.627 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.627 07:08:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.627 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.627 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.627 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.627 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.628 [2024-11-20 07:08:37.752453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.628 [2024-11-20 07:08:37.752512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.628 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.888 BaseBdev2 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.888 07:08:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.888 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.888 [ 00:11:55.888 { 00:11:55.888 "name": "BaseBdev2", 00:11:55.888 "aliases": [ 00:11:55.888 "40e724a0-2f5f-4b2d-9cab-b0c773df1da6" 00:11:55.888 ], 00:11:55.888 "product_name": "Malloc disk", 00:11:55.888 "block_size": 512, 00:11:55.888 "num_blocks": 65536, 00:11:55.888 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:55.888 "assigned_rate_limits": { 00:11:55.889 "rw_ios_per_sec": 0, 00:11:55.889 "rw_mbytes_per_sec": 0, 00:11:55.889 "r_mbytes_per_sec": 0, 00:11:55.889 "w_mbytes_per_sec": 0 00:11:55.889 }, 00:11:55.889 "claimed": false, 00:11:55.889 "zoned": false, 00:11:55.889 "supported_io_types": { 00:11:55.889 "read": true, 00:11:55.889 "write": true, 00:11:55.889 "unmap": true, 00:11:55.889 "flush": true, 00:11:55.889 "reset": true, 00:11:55.889 "nvme_admin": false, 00:11:55.889 "nvme_io": false, 00:11:55.889 "nvme_io_md": false, 00:11:55.889 "write_zeroes": true, 00:11:55.889 "zcopy": true, 00:11:55.889 "get_zone_info": false, 00:11:55.889 
"zone_management": false, 00:11:55.889 "zone_append": false, 00:11:55.889 "compare": false, 00:11:55.889 "compare_and_write": false, 00:11:55.889 "abort": true, 00:11:55.889 "seek_hole": false, 00:11:55.889 "seek_data": false, 00:11:55.889 "copy": true, 00:11:55.889 "nvme_iov_md": false 00:11:55.889 }, 00:11:55.889 "memory_domains": [ 00:11:55.889 { 00:11:55.889 "dma_device_id": "system", 00:11:55.889 "dma_device_type": 1 00:11:55.889 }, 00:11:55.889 { 00:11:55.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.889 "dma_device_type": 2 00:11:55.889 } 00:11:55.889 ], 00:11:55.889 "driver_specific": {} 00:11:55.889 } 00:11:55.889 ] 00:11:55.889 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.889 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.889 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.889 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.889 07:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.889 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.889 07:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.889 BaseBdev3 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.889 [ 00:11:55.889 { 00:11:55.889 "name": "BaseBdev3", 00:11:55.889 "aliases": [ 00:11:55.889 "8d4a516b-54cb-436d-b1c6-bb29593c3e4c" 00:11:55.889 ], 00:11:55.889 "product_name": "Malloc disk", 00:11:55.889 "block_size": 512, 00:11:55.889 "num_blocks": 65536, 00:11:55.889 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:55.889 "assigned_rate_limits": { 00:11:55.889 "rw_ios_per_sec": 0, 00:11:55.889 "rw_mbytes_per_sec": 0, 00:11:55.889 "r_mbytes_per_sec": 0, 00:11:55.889 "w_mbytes_per_sec": 0 00:11:55.889 }, 00:11:55.889 "claimed": false, 00:11:55.889 "zoned": false, 00:11:55.889 "supported_io_types": { 00:11:55.889 "read": true, 00:11:55.889 "write": true, 00:11:55.889 "unmap": true, 00:11:55.889 "flush": true, 00:11:55.889 "reset": true, 00:11:55.889 "nvme_admin": false, 00:11:55.889 "nvme_io": false, 00:11:55.889 "nvme_io_md": false, 00:11:55.889 "write_zeroes": true, 00:11:55.889 
"zcopy": true, 00:11:55.889 "get_zone_info": false, 00:11:55.889 "zone_management": false, 00:11:55.889 "zone_append": false, 00:11:55.889 "compare": false, 00:11:55.889 "compare_and_write": false, 00:11:55.889 "abort": true, 00:11:55.889 "seek_hole": false, 00:11:55.889 "seek_data": false, 00:11:55.889 "copy": true, 00:11:55.889 "nvme_iov_md": false 00:11:55.889 }, 00:11:55.889 "memory_domains": [ 00:11:55.889 { 00:11:55.889 "dma_device_id": "system", 00:11:55.889 "dma_device_type": 1 00:11:55.889 }, 00:11:55.889 { 00:11:55.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.889 "dma_device_type": 2 00:11:55.889 } 00:11:55.889 ], 00:11:55.889 "driver_specific": {} 00:11:55.889 } 00:11:55.889 ] 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.889 [2024-11-20 07:08:38.072107] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.889 [2024-11-20 07:08:38.072221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.889 [2024-11-20 07:08:38.072292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.889 [2024-11-20 07:08:38.074258] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.889 07:08:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.889 "name": "Existed_Raid", 00:11:55.889 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:55.889 "strip_size_kb": 64, 00:11:55.889 "state": "configuring", 00:11:55.889 "raid_level": "raid0", 00:11:55.889 "superblock": true, 00:11:55.889 "num_base_bdevs": 3, 00:11:55.889 "num_base_bdevs_discovered": 2, 00:11:55.889 "num_base_bdevs_operational": 3, 00:11:55.889 "base_bdevs_list": [ 00:11:55.889 { 00:11:55.889 "name": "BaseBdev1", 00:11:55.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.889 "is_configured": false, 00:11:55.889 "data_offset": 0, 00:11:55.889 "data_size": 0 00:11:55.889 }, 00:11:55.889 { 00:11:55.889 "name": "BaseBdev2", 00:11:55.889 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:55.889 "is_configured": true, 00:11:55.889 "data_offset": 2048, 00:11:55.889 "data_size": 63488 00:11:55.889 }, 00:11:55.889 { 00:11:55.889 "name": "BaseBdev3", 00:11:55.889 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:55.889 "is_configured": true, 00:11:55.889 "data_offset": 2048, 00:11:55.889 "data_size": 63488 00:11:55.889 } 00:11:55.889 ] 00:11:55.889 }' 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.889 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.459 [2024-11-20 07:08:38.531328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.459 07:08:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.459 "name": "Existed_Raid", 00:11:56.459 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:56.459 "strip_size_kb": 64, 
00:11:56.459 "state": "configuring", 00:11:56.459 "raid_level": "raid0", 00:11:56.459 "superblock": true, 00:11:56.459 "num_base_bdevs": 3, 00:11:56.459 "num_base_bdevs_discovered": 1, 00:11:56.459 "num_base_bdevs_operational": 3, 00:11:56.459 "base_bdevs_list": [ 00:11:56.459 { 00:11:56.459 "name": "BaseBdev1", 00:11:56.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.459 "is_configured": false, 00:11:56.459 "data_offset": 0, 00:11:56.459 "data_size": 0 00:11:56.459 }, 00:11:56.459 { 00:11:56.459 "name": null, 00:11:56.459 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:56.459 "is_configured": false, 00:11:56.459 "data_offset": 0, 00:11:56.459 "data_size": 63488 00:11:56.459 }, 00:11:56.459 { 00:11:56.459 "name": "BaseBdev3", 00:11:56.459 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:56.459 "is_configured": true, 00:11:56.459 "data_offset": 2048, 00:11:56.459 "data_size": 63488 00:11:56.459 } 00:11:56.459 ] 00:11:56.459 }' 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.459 07:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.027 [2024-11-20 07:08:39.118972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.027 BaseBdev1 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.027 
[ 00:11:57.027 { 00:11:57.027 "name": "BaseBdev1", 00:11:57.027 "aliases": [ 00:11:57.027 "068e6de1-38de-4a80-9efd-1e6dbb04cb77" 00:11:57.027 ], 00:11:57.027 "product_name": "Malloc disk", 00:11:57.027 "block_size": 512, 00:11:57.027 "num_blocks": 65536, 00:11:57.027 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:57.027 "assigned_rate_limits": { 00:11:57.027 "rw_ios_per_sec": 0, 00:11:57.027 "rw_mbytes_per_sec": 0, 00:11:57.027 "r_mbytes_per_sec": 0, 00:11:57.027 "w_mbytes_per_sec": 0 00:11:57.027 }, 00:11:57.027 "claimed": true, 00:11:57.027 "claim_type": "exclusive_write", 00:11:57.027 "zoned": false, 00:11:57.027 "supported_io_types": { 00:11:57.027 "read": true, 00:11:57.027 "write": true, 00:11:57.027 "unmap": true, 00:11:57.027 "flush": true, 00:11:57.027 "reset": true, 00:11:57.027 "nvme_admin": false, 00:11:57.027 "nvme_io": false, 00:11:57.027 "nvme_io_md": false, 00:11:57.027 "write_zeroes": true, 00:11:57.027 "zcopy": true, 00:11:57.027 "get_zone_info": false, 00:11:57.027 "zone_management": false, 00:11:57.027 "zone_append": false, 00:11:57.027 "compare": false, 00:11:57.027 "compare_and_write": false, 00:11:57.027 "abort": true, 00:11:57.027 "seek_hole": false, 00:11:57.027 "seek_data": false, 00:11:57.027 "copy": true, 00:11:57.027 "nvme_iov_md": false 00:11:57.027 }, 00:11:57.027 "memory_domains": [ 00:11:57.027 { 00:11:57.027 "dma_device_id": "system", 00:11:57.027 "dma_device_type": 1 00:11:57.027 }, 00:11:57.027 { 00:11:57.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.027 "dma_device_type": 2 00:11:57.027 } 00:11:57.027 ], 00:11:57.027 "driver_specific": {} 00:11:57.027 } 00:11:57.027 ] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.027 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.027 "name": "Existed_Raid", 00:11:57.027 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:57.027 "strip_size_kb": 64, 00:11:57.027 "state": "configuring", 00:11:57.028 "raid_level": "raid0", 00:11:57.028 "superblock": true, 
00:11:57.028 "num_base_bdevs": 3, 00:11:57.028 "num_base_bdevs_discovered": 2, 00:11:57.028 "num_base_bdevs_operational": 3, 00:11:57.028 "base_bdevs_list": [ 00:11:57.028 { 00:11:57.028 "name": "BaseBdev1", 00:11:57.028 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:57.028 "is_configured": true, 00:11:57.028 "data_offset": 2048, 00:11:57.028 "data_size": 63488 00:11:57.028 }, 00:11:57.028 { 00:11:57.028 "name": null, 00:11:57.028 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:57.028 "is_configured": false, 00:11:57.028 "data_offset": 0, 00:11:57.028 "data_size": 63488 00:11:57.028 }, 00:11:57.028 { 00:11:57.028 "name": "BaseBdev3", 00:11:57.028 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:57.028 "is_configured": true, 00:11:57.028 "data_offset": 2048, 00:11:57.028 "data_size": 63488 00:11:57.028 } 00:11:57.028 ] 00:11:57.028 }' 00:11:57.028 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.028 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.595 [2024-11-20 07:08:39.678180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.595 "name": "Existed_Raid", 00:11:57.595 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:57.595 "strip_size_kb": 64, 00:11:57.595 "state": "configuring", 00:11:57.595 "raid_level": "raid0", 00:11:57.595 "superblock": true, 00:11:57.595 "num_base_bdevs": 3, 00:11:57.595 "num_base_bdevs_discovered": 1, 00:11:57.595 "num_base_bdevs_operational": 3, 00:11:57.595 "base_bdevs_list": [ 00:11:57.595 { 00:11:57.595 "name": "BaseBdev1", 00:11:57.595 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:57.595 "is_configured": true, 00:11:57.595 "data_offset": 2048, 00:11:57.595 "data_size": 63488 00:11:57.595 }, 00:11:57.595 { 00:11:57.595 "name": null, 00:11:57.595 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:57.595 "is_configured": false, 00:11:57.595 "data_offset": 0, 00:11:57.595 "data_size": 63488 00:11:57.595 }, 00:11:57.595 { 00:11:57.595 "name": null, 00:11:57.595 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:57.595 "is_configured": false, 00:11:57.595 "data_offset": 0, 00:11:57.595 "data_size": 63488 00:11:57.595 } 00:11:57.595 ] 00:11:57.595 }' 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.595 07:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.178 [2024-11-20 07:08:40.197460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.178 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.178 "name": "Existed_Raid", 00:11:58.178 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:58.178 "strip_size_kb": 64, 00:11:58.178 "state": "configuring", 00:11:58.178 "raid_level": "raid0", 00:11:58.178 "superblock": true, 00:11:58.178 "num_base_bdevs": 3, 00:11:58.178 "num_base_bdevs_discovered": 2, 00:11:58.178 "num_base_bdevs_operational": 3, 00:11:58.178 "base_bdevs_list": [ 00:11:58.178 { 00:11:58.178 "name": "BaseBdev1", 00:11:58.178 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:58.178 "is_configured": true, 00:11:58.178 "data_offset": 2048, 00:11:58.178 "data_size": 63488 00:11:58.178 }, 00:11:58.178 { 00:11:58.178 "name": null, 00:11:58.178 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:58.178 "is_configured": false, 00:11:58.178 "data_offset": 0, 00:11:58.178 "data_size": 63488 00:11:58.178 }, 00:11:58.178 { 00:11:58.178 "name": "BaseBdev3", 00:11:58.179 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:58.179 "is_configured": true, 00:11:58.179 "data_offset": 2048, 00:11:58.179 "data_size": 63488 00:11:58.179 } 00:11:58.179 ] 00:11:58.179 }' 00:11:58.179 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.179 07:08:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.438 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.438 [2024-11-20 07:08:40.672658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.696 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.696 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:58.696 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.696 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.696 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.697 "name": "Existed_Raid", 00:11:58.697 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:58.697 "strip_size_kb": 64, 00:11:58.697 "state": "configuring", 00:11:58.697 "raid_level": "raid0", 00:11:58.697 "superblock": true, 00:11:58.697 "num_base_bdevs": 3, 00:11:58.697 "num_base_bdevs_discovered": 1, 00:11:58.697 "num_base_bdevs_operational": 3, 00:11:58.697 "base_bdevs_list": [ 00:11:58.697 { 00:11:58.697 "name": null, 00:11:58.697 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:58.697 "is_configured": false, 00:11:58.697 "data_offset": 0, 00:11:58.697 "data_size": 63488 00:11:58.697 }, 00:11:58.697 { 00:11:58.697 "name": null, 00:11:58.697 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:58.697 "is_configured": false, 00:11:58.697 "data_offset": 0, 00:11:58.697 
"data_size": 63488 00:11:58.697 }, 00:11:58.697 { 00:11:58.697 "name": "BaseBdev3", 00:11:58.697 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:58.697 "is_configured": true, 00:11:58.697 "data_offset": 2048, 00:11:58.697 "data_size": 63488 00:11:58.697 } 00:11:58.697 ] 00:11:58.697 }' 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.697 07:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 [2024-11-20 07:08:41.303393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:59.264 07:08:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.264 "name": "Existed_Raid", 00:11:59.264 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:59.264 "strip_size_kb": 64, 00:11:59.264 "state": "configuring", 00:11:59.264 "raid_level": "raid0", 00:11:59.264 "superblock": true, 00:11:59.264 "num_base_bdevs": 3, 00:11:59.264 
"num_base_bdevs_discovered": 2, 00:11:59.264 "num_base_bdevs_operational": 3, 00:11:59.264 "base_bdevs_list": [ 00:11:59.264 { 00:11:59.264 "name": null, 00:11:59.264 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:59.264 "is_configured": false, 00:11:59.264 "data_offset": 0, 00:11:59.264 "data_size": 63488 00:11:59.264 }, 00:11:59.264 { 00:11:59.264 "name": "BaseBdev2", 00:11:59.264 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:59.264 "is_configured": true, 00:11:59.264 "data_offset": 2048, 00:11:59.264 "data_size": 63488 00:11:59.264 }, 00:11:59.264 { 00:11:59.264 "name": "BaseBdev3", 00:11:59.264 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:59.264 "is_configured": true, 00:11:59.264 "data_offset": 2048, 00:11:59.264 "data_size": 63488 00:11:59.264 } 00:11:59.264 ] 00:11:59.264 }' 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.264 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.524 07:08:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 068e6de1-38de-4a80-9efd-1e6dbb04cb77 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.524 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 [2024-11-20 07:08:41.805415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:59.783 [2024-11-20 07:08:41.805746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.783 [2024-11-20 07:08:41.805769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:59.783 [2024-11-20 07:08:41.806035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:59.783 [2024-11-20 07:08:41.806191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.783 [2024-11-20 07:08:41.806201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:59.783 NewBaseBdev 00:11:59.783 [2024-11-20 07:08:41.806368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.783 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:59.783 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:59.783 
07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.783 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:59.783 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.783 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.783 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 [ 00:11:59.784 { 00:11:59.784 "name": "NewBaseBdev", 00:11:59.784 "aliases": [ 00:11:59.784 "068e6de1-38de-4a80-9efd-1e6dbb04cb77" 00:11:59.784 ], 00:11:59.784 "product_name": "Malloc disk", 00:11:59.784 "block_size": 512, 00:11:59.784 "num_blocks": 65536, 00:11:59.784 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:59.784 "assigned_rate_limits": { 00:11:59.784 "rw_ios_per_sec": 0, 00:11:59.784 "rw_mbytes_per_sec": 0, 00:11:59.784 "r_mbytes_per_sec": 0, 00:11:59.784 "w_mbytes_per_sec": 0 00:11:59.784 }, 00:11:59.784 "claimed": true, 00:11:59.784 "claim_type": "exclusive_write", 00:11:59.784 "zoned": false, 00:11:59.784 "supported_io_types": { 00:11:59.784 "read": true, 00:11:59.784 "write": true, 00:11:59.784 
"unmap": true, 00:11:59.784 "flush": true, 00:11:59.784 "reset": true, 00:11:59.784 "nvme_admin": false, 00:11:59.784 "nvme_io": false, 00:11:59.784 "nvme_io_md": false, 00:11:59.784 "write_zeroes": true, 00:11:59.784 "zcopy": true, 00:11:59.784 "get_zone_info": false, 00:11:59.784 "zone_management": false, 00:11:59.784 "zone_append": false, 00:11:59.784 "compare": false, 00:11:59.784 "compare_and_write": false, 00:11:59.784 "abort": true, 00:11:59.784 "seek_hole": false, 00:11:59.784 "seek_data": false, 00:11:59.784 "copy": true, 00:11:59.784 "nvme_iov_md": false 00:11:59.784 }, 00:11:59.784 "memory_domains": [ 00:11:59.784 { 00:11:59.784 "dma_device_id": "system", 00:11:59.784 "dma_device_type": 1 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.784 "dma_device_type": 2 00:11:59.784 } 00:11:59.784 ], 00:11:59.784 "driver_specific": {} 00:11:59.784 } 00:11:59.784 ] 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.784 "name": "Existed_Raid", 00:11:59.784 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:11:59.784 "strip_size_kb": 64, 00:11:59.784 "state": "online", 00:11:59.784 "raid_level": "raid0", 00:11:59.784 "superblock": true, 00:11:59.784 "num_base_bdevs": 3, 00:11:59.784 "num_base_bdevs_discovered": 3, 00:11:59.784 "num_base_bdevs_operational": 3, 00:11:59.784 "base_bdevs_list": [ 00:11:59.784 { 00:11:59.784 "name": "NewBaseBdev", 00:11:59.784 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:11:59.784 "is_configured": true, 00:11:59.784 "data_offset": 2048, 00:11:59.784 "data_size": 63488 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": "BaseBdev2", 00:11:59.784 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:11:59.784 "is_configured": true, 00:11:59.784 "data_offset": 2048, 00:11:59.784 "data_size": 63488 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": "BaseBdev3", 00:11:59.784 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:11:59.784 
"is_configured": true, 00:11:59.784 "data_offset": 2048, 00:11:59.784 "data_size": 63488 00:11:59.784 } 00:11:59.784 ] 00:11:59.784 }' 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.784 07:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.042 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.301 [2024-11-20 07:08:42.308970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.301 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.301 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.301 "name": "Existed_Raid", 00:12:00.301 "aliases": [ 00:12:00.301 "3bcd231a-f7d8-4c6d-ba41-fba747ba3459" 00:12:00.301 ], 00:12:00.301 "product_name": "Raid 
Volume", 00:12:00.301 "block_size": 512, 00:12:00.301 "num_blocks": 190464, 00:12:00.301 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:12:00.301 "assigned_rate_limits": { 00:12:00.301 "rw_ios_per_sec": 0, 00:12:00.301 "rw_mbytes_per_sec": 0, 00:12:00.301 "r_mbytes_per_sec": 0, 00:12:00.301 "w_mbytes_per_sec": 0 00:12:00.301 }, 00:12:00.301 "claimed": false, 00:12:00.301 "zoned": false, 00:12:00.301 "supported_io_types": { 00:12:00.301 "read": true, 00:12:00.301 "write": true, 00:12:00.301 "unmap": true, 00:12:00.301 "flush": true, 00:12:00.302 "reset": true, 00:12:00.302 "nvme_admin": false, 00:12:00.302 "nvme_io": false, 00:12:00.302 "nvme_io_md": false, 00:12:00.302 "write_zeroes": true, 00:12:00.302 "zcopy": false, 00:12:00.302 "get_zone_info": false, 00:12:00.302 "zone_management": false, 00:12:00.302 "zone_append": false, 00:12:00.302 "compare": false, 00:12:00.302 "compare_and_write": false, 00:12:00.302 "abort": false, 00:12:00.302 "seek_hole": false, 00:12:00.302 "seek_data": false, 00:12:00.302 "copy": false, 00:12:00.302 "nvme_iov_md": false 00:12:00.302 }, 00:12:00.302 "memory_domains": [ 00:12:00.302 { 00:12:00.302 "dma_device_id": "system", 00:12:00.302 "dma_device_type": 1 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.302 "dma_device_type": 2 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "dma_device_id": "system", 00:12:00.302 "dma_device_type": 1 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.302 "dma_device_type": 2 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "dma_device_id": "system", 00:12:00.302 "dma_device_type": 1 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.302 "dma_device_type": 2 00:12:00.302 } 00:12:00.302 ], 00:12:00.302 "driver_specific": { 00:12:00.302 "raid": { 00:12:00.302 "uuid": "3bcd231a-f7d8-4c6d-ba41-fba747ba3459", 00:12:00.302 "strip_size_kb": 64, 00:12:00.302 "state": "online", 
00:12:00.302 "raid_level": "raid0", 00:12:00.302 "superblock": true, 00:12:00.302 "num_base_bdevs": 3, 00:12:00.302 "num_base_bdevs_discovered": 3, 00:12:00.302 "num_base_bdevs_operational": 3, 00:12:00.302 "base_bdevs_list": [ 00:12:00.302 { 00:12:00.302 "name": "NewBaseBdev", 00:12:00.302 "uuid": "068e6de1-38de-4a80-9efd-1e6dbb04cb77", 00:12:00.302 "is_configured": true, 00:12:00.302 "data_offset": 2048, 00:12:00.302 "data_size": 63488 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "name": "BaseBdev2", 00:12:00.302 "uuid": "40e724a0-2f5f-4b2d-9cab-b0c773df1da6", 00:12:00.302 "is_configured": true, 00:12:00.302 "data_offset": 2048, 00:12:00.302 "data_size": 63488 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "name": "BaseBdev3", 00:12:00.302 "uuid": "8d4a516b-54cb-436d-b1c6-bb29593c3e4c", 00:12:00.302 "is_configured": true, 00:12:00.302 "data_offset": 2048, 00:12:00.302 "data_size": 63488 00:12:00.302 } 00:12:00.302 ] 00:12:00.302 } 00:12:00.302 } 00:12:00.302 }' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:00.302 BaseBdev2 00:12:00.302 BaseBdev3' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.302 07:08:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.302 07:08:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.302 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.562 [2024-11-20 07:08:42.588236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.562 [2024-11-20 07:08:42.588279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.562 [2024-11-20 07:08:42.588421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.562 [2024-11-20 07:08:42.588506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.562 [2024-11-20 07:08:42.588527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64746 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64746 ']' 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64746 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64746 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64746' 00:12:00.562 killing process with pid 64746 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64746 00:12:00.562 [2024-11-20 07:08:42.639775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.562 07:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64746 00:12:00.855 [2024-11-20 07:08:42.978654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.237 07:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:02.237 00:12:02.237 real 0m10.747s 00:12:02.237 user 0m17.019s 00:12:02.237 sys 0m1.850s 00:12:02.237 07:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.237 07:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.237 ************************************ 00:12:02.237 END TEST raid_state_function_test_sb 00:12:02.237 ************************************ 00:12:02.237 07:08:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:02.237 07:08:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.237 
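The verification loop logged above (bdev_raid.sh@187–@193) pulls the raid bdev's JSON from `bdev_get_bdevs`, extracts the configured base bdev names with jq, then compares the `block_size md_size md_interleave dif_type` string of the raid volume against each base bdev. A minimal sketch of that jq selection, assuming jq is available and using a literal JSON fragment in place of the live RPC response (the field shape matches the dumps above):

```shell
# Stand-in for the bdev_get_bdevs RPC output; only the fields the filter
# touches are kept. In the real test this comes from scripts/rpc.py.
raid_bdev_info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true},
  {"name":"pt3","is_configured":false}]}}}'

# Same filter as bdev_raid.sh@188: keep only configured base bdevs.
base_bdev_names=$(echo "$raid_bdev_info" |
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

echo "$base_bdev_names"
```

With this input the filter emits `pt1` and `pt2` on separate lines; the test then loops `for name in $base_bdev_names`, relying on word splitting to iterate the newline-separated list.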
07:08:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.237 07:08:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.237 ************************************ 00:12:02.237 START TEST raid_superblock_test 00:12:02.237 ************************************ 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
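The `'[' raid0 '!=' raid1 ']'` branch logged at bdev_raid.sh@404–@406 picks the strip size: striped levels get a 64 KiB strip passed to `bdev_raid_create` via `-z`, while raid1 takes none. A small sketch of that selection logic, reconstructed from the trace above:

```shell
# Strip-size selection as traced at bdev_raid.sh@404-406: any level other
# than raid1 gets a 64 KiB strip; raid1 has no strip-size argument.
raid_level=raid0
strip_size_create_arg=""
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi
echo "$strip_size_create_arg"
```

For `raid_level=raid0` this yields `-z 64`, which is exactly the argument visible in the `bdev_raid_create` call later in the log.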
00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65362 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65362 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65362 ']' 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.237 07:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.237 [2024-11-20 07:08:44.288557] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:02.237 [2024-11-20 07:08:44.288671] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65362 ] 00:12:02.237 [2024-11-20 07:08:44.464894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.495 [2024-11-20 07:08:44.587887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.752 [2024-11-20 07:08:44.800912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.753 [2024-11-20 07:08:44.800962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.010 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:03.011 
07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 malloc1 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 [2024-11-20 07:08:45.212482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.011 [2024-11-20 07:08:45.212592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.011 [2024-11-20 07:08:45.212637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.011 [2024-11-20 07:08:45.212667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.011 [2024-11-20 07:08:45.215102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.011 [2024-11-20 07:08:45.215186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.011 pt1 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 malloc2 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.011 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 [2024-11-20 07:08:45.273039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.011 [2024-11-20 07:08:45.273157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.011 [2024-11-20 07:08:45.273210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.011 [2024-11-20 07:08:45.273245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.269 [2024-11-20 07:08:45.275856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.269 [2024-11-20 07:08:45.275933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.269 
pt2 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.269 malloc3 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.269 [2024-11-20 07:08:45.344649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:03.269 [2024-11-20 07:08:45.344757] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.269 [2024-11-20 07:08:45.344831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:03.269 [2024-11-20 07:08:45.344872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.269 [2024-11-20 07:08:45.347314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.269 [2024-11-20 07:08:45.347410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:03.269 pt3 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.269 [2024-11-20 07:08:45.356690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.269 [2024-11-20 07:08:45.358875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.269 [2024-11-20 07:08:45.358998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:03.269 [2024-11-20 07:08:45.359213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:03.269 [2024-11-20 07:08:45.359270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:03.269 [2024-11-20 07:08:45.359619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
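The build-up just traced follows a fixed RPC sequence per base bdev: a malloc bdev, a passthru bdev claiming it under a fixed UUID, and finally one `bdev_raid_create` over all three passthrus. The sketch below only composes those command lines (taken verbatim from the trace) into an array and prints them; it does not execute them, and the `scripts/rpc.py` path is an assumption about where the RPC client lives relative to the repo root:

```shell
# Compose (but do not run) the RPC sequence the superblock test drives.
# Commands are copied from the xtrace above; $rpc is an assumed client path.
rpc="scripts/rpc.py"
cmds=()
for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks, then a passthru claiming it.
    cmds+=("$rpc bdev_malloc_create 32 512 -b malloc$i")
    cmds+=("$rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i")
done
# raid0 over the three passthrus, 64 KiB strip, superblock enabled (-s).
cmds+=("$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s")
printf '%s\n' "${cmds[@]}"
```

The `-s` flag is what makes this the superblock variant: the on-disk superblock is why each base bdev reports `data_offset: 2048` and `data_size: 63488` (out of 65536 blocks) in the JSON dumps.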
00:12:03.269 [2024-11-20 07:08:45.359848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:03.269 [2024-11-20 07:08:45.359899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:03.269 [2024-11-20 07:08:45.360156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.269 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.270 07:08:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.270 "name": "raid_bdev1", 00:12:03.270 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:03.270 "strip_size_kb": 64, 00:12:03.270 "state": "online", 00:12:03.270 "raid_level": "raid0", 00:12:03.270 "superblock": true, 00:12:03.270 "num_base_bdevs": 3, 00:12:03.270 "num_base_bdevs_discovered": 3, 00:12:03.270 "num_base_bdevs_operational": 3, 00:12:03.270 "base_bdevs_list": [ 00:12:03.270 { 00:12:03.270 "name": "pt1", 00:12:03.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.270 "is_configured": true, 00:12:03.270 "data_offset": 2048, 00:12:03.270 "data_size": 63488 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "name": "pt2", 00:12:03.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.270 "is_configured": true, 00:12:03.270 "data_offset": 2048, 00:12:03.270 "data_size": 63488 00:12:03.270 }, 00:12:03.270 { 00:12:03.270 "name": "pt3", 00:12:03.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.270 "is_configured": true, 00:12:03.270 "data_offset": 2048, 00:12:03.270 "data_size": 63488 00:12:03.270 } 00:12:03.270 ] 00:12:03.270 }' 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.270 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.837 [2024-11-20 07:08:45.828190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.837 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.837 "name": "raid_bdev1", 00:12:03.837 "aliases": [ 00:12:03.837 "8e259ce6-6397-4725-925d-cf97a580e3c6" 00:12:03.837 ], 00:12:03.837 "product_name": "Raid Volume", 00:12:03.837 "block_size": 512, 00:12:03.837 "num_blocks": 190464, 00:12:03.837 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:03.837 "assigned_rate_limits": { 00:12:03.837 "rw_ios_per_sec": 0, 00:12:03.837 "rw_mbytes_per_sec": 0, 00:12:03.837 "r_mbytes_per_sec": 0, 00:12:03.837 "w_mbytes_per_sec": 0 00:12:03.837 }, 00:12:03.838 "claimed": false, 00:12:03.838 "zoned": false, 00:12:03.838 "supported_io_types": { 00:12:03.838 "read": true, 00:12:03.838 "write": true, 00:12:03.838 "unmap": true, 00:12:03.838 "flush": true, 00:12:03.838 "reset": true, 00:12:03.838 "nvme_admin": false, 00:12:03.838 "nvme_io": false, 00:12:03.838 "nvme_io_md": false, 00:12:03.838 "write_zeroes": true, 00:12:03.838 "zcopy": false, 00:12:03.838 "get_zone_info": false, 00:12:03.838 "zone_management": false, 00:12:03.838 "zone_append": false, 00:12:03.838 "compare": 
false, 00:12:03.838 "compare_and_write": false, 00:12:03.838 "abort": false, 00:12:03.838 "seek_hole": false, 00:12:03.838 "seek_data": false, 00:12:03.838 "copy": false, 00:12:03.838 "nvme_iov_md": false 00:12:03.838 }, 00:12:03.838 "memory_domains": [ 00:12:03.838 { 00:12:03.838 "dma_device_id": "system", 00:12:03.838 "dma_device_type": 1 00:12:03.838 }, 00:12:03.838 { 00:12:03.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.838 "dma_device_type": 2 00:12:03.838 }, 00:12:03.838 { 00:12:03.838 "dma_device_id": "system", 00:12:03.838 "dma_device_type": 1 00:12:03.838 }, 00:12:03.838 { 00:12:03.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.838 "dma_device_type": 2 00:12:03.838 }, 00:12:03.838 { 00:12:03.838 "dma_device_id": "system", 00:12:03.838 "dma_device_type": 1 00:12:03.838 }, 00:12:03.838 { 00:12:03.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.838 "dma_device_type": 2 00:12:03.838 } 00:12:03.838 ], 00:12:03.838 "driver_specific": { 00:12:03.838 "raid": { 00:12:03.838 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:03.838 "strip_size_kb": 64, 00:12:03.838 "state": "online", 00:12:03.838 "raid_level": "raid0", 00:12:03.838 "superblock": true, 00:12:03.838 "num_base_bdevs": 3, 00:12:03.838 "num_base_bdevs_discovered": 3, 00:12:03.838 "num_base_bdevs_operational": 3, 00:12:03.838 "base_bdevs_list": [ 00:12:03.838 { 00:12:03.838 "name": "pt1", 00:12:03.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.838 "is_configured": true, 00:12:03.838 "data_offset": 2048, 00:12:03.838 "data_size": 63488 00:12:03.838 }, 00:12:03.838 { 00:12:03.838 "name": "pt2", 00:12:03.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.838 "is_configured": true, 00:12:03.838 "data_offset": 2048, 00:12:03.838 "data_size": 63488 00:12:03.838 }, 00:12:03.838 { 00:12:03.838 "name": "pt3", 00:12:03.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.838 "is_configured": true, 00:12:03.838 "data_offset": 2048, 00:12:03.838 "data_size": 
63488 00:12:03.838 } 00:12:03.838 ] 00:12:03.838 } 00:12:03.838 } 00:12:03.838 }' 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:03.838 pt2 00:12:03.838 pt3' 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.838 07:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.838 
07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.838 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:04.098 [2024-11-20 07:08:46.127651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8e259ce6-6397-4725-925d-cf97a580e3c6 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8e259ce6-6397-4725-925d-cf97a580e3c6 ']' 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 [2024-11-20 07:08:46.175285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.098 [2024-11-20 07:08:46.175326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.098 [2024-11-20 07:08:46.175442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.098 [2024-11-20 07:08:46.175523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.098 [2024-11-20 07:08:46.175534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:04.098 07:08:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 [2024-11-20 07:08:46.323099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:04.098 [2024-11-20 07:08:46.325183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:04.098 [2024-11-20 07:08:46.325328] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:04.098 [2024-11-20 07:08:46.325404] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:04.098 [2024-11-20 07:08:46.325464] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:04.098 [2024-11-20 07:08:46.325485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:04.098 [2024-11-20 07:08:46.325504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.098 [2024-11-20 07:08:46.325517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:04.098 request: 00:12:04.098 { 00:12:04.098 "name": "raid_bdev1", 00:12:04.098 "raid_level": "raid0", 00:12:04.098 "base_bdevs": [ 00:12:04.098 "malloc1", 00:12:04.098 "malloc2", 00:12:04.098 "malloc3" 00:12:04.098 ], 00:12:04.098 "strip_size_kb": 64, 00:12:04.098 "superblock": false, 00:12:04.098 "method": "bdev_raid_create", 00:12:04.098 "req_id": 1 00:12:04.098 } 00:12:04.098 Got JSON-RPC error response 00:12:04.098 response: 00:12:04.098 { 00:12:04.098 "code": -17, 00:12:04.098 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:04.098 } 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.098 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.358 [2024-11-20 07:08:46.382935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.358 [2024-11-20 07:08:46.383100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.358 [2024-11-20 07:08:46.383143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:04.358 [2024-11-20 07:08:46.383174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.358 [2024-11-20 07:08:46.385625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.358 [2024-11-20 07:08:46.385714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.358 [2024-11-20 07:08:46.385846] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:04.358 [2024-11-20 07:08:46.385943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:12:04.358 pt1 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.358 "name": "raid_bdev1", 00:12:04.358 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:04.358 
"strip_size_kb": 64, 00:12:04.358 "state": "configuring", 00:12:04.358 "raid_level": "raid0", 00:12:04.358 "superblock": true, 00:12:04.358 "num_base_bdevs": 3, 00:12:04.358 "num_base_bdevs_discovered": 1, 00:12:04.358 "num_base_bdevs_operational": 3, 00:12:04.358 "base_bdevs_list": [ 00:12:04.358 { 00:12:04.358 "name": "pt1", 00:12:04.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.358 "is_configured": true, 00:12:04.358 "data_offset": 2048, 00:12:04.358 "data_size": 63488 00:12:04.358 }, 00:12:04.358 { 00:12:04.358 "name": null, 00:12:04.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.358 "is_configured": false, 00:12:04.358 "data_offset": 2048, 00:12:04.358 "data_size": 63488 00:12:04.358 }, 00:12:04.358 { 00:12:04.358 "name": null, 00:12:04.358 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.358 "is_configured": false, 00:12:04.358 "data_offset": 2048, 00:12:04.358 "data_size": 63488 00:12:04.358 } 00:12:04.358 ] 00:12:04.358 }' 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.358 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.617 [2024-11-20 07:08:46.814253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.617 [2024-11-20 07:08:46.814371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.617 [2024-11-20 07:08:46.814400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:12:04.617 [2024-11-20 07:08:46.814411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.617 [2024-11-20 07:08:46.814922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.617 [2024-11-20 07:08:46.814942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.617 [2024-11-20 07:08:46.815049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:04.617 [2024-11-20 07:08:46.815070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.617 pt2 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.617 [2024-11-20 07:08:46.826205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.617 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.618 07:08:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.618 "name": "raid_bdev1", 00:12:04.618 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:04.618 "strip_size_kb": 64, 00:12:04.618 "state": "configuring", 00:12:04.618 "raid_level": "raid0", 00:12:04.618 "superblock": true, 00:12:04.618 "num_base_bdevs": 3, 00:12:04.618 "num_base_bdevs_discovered": 1, 00:12:04.618 "num_base_bdevs_operational": 3, 00:12:04.618 "base_bdevs_list": [ 00:12:04.618 { 00:12:04.618 "name": "pt1", 00:12:04.618 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.618 "is_configured": true, 00:12:04.618 "data_offset": 2048, 00:12:04.618 "data_size": 63488 00:12:04.618 }, 00:12:04.618 { 00:12:04.618 "name": null, 00:12:04.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.618 "is_configured": false, 00:12:04.618 "data_offset": 0, 00:12:04.618 "data_size": 63488 00:12:04.618 }, 00:12:04.618 { 00:12:04.618 "name": null, 00:12:04.618 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.618 
"is_configured": false, 00:12:04.618 "data_offset": 2048, 00:12:04.618 "data_size": 63488 00:12:04.618 } 00:12:04.618 ] 00:12:04.618 }' 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.618 07:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.183 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:05.183 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.183 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.183 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.183 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.183 [2024-11-20 07:08:47.277506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.183 [2024-11-20 07:08:47.277660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.183 [2024-11-20 07:08:47.277702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:05.183 [2024-11-20 07:08:47.277766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.183 [2024-11-20 07:08:47.278301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.183 [2024-11-20 07:08:47.278382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.183 [2024-11-20 07:08:47.278513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.183 [2024-11-20 07:08:47.278575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.183 pt2 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.184 [2024-11-20 07:08:47.289490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.184 [2024-11-20 07:08:47.289628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.184 [2024-11-20 07:08:47.289669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.184 [2024-11-20 07:08:47.289717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.184 [2024-11-20 07:08:47.290287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.184 [2024-11-20 07:08:47.290410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.184 [2024-11-20 07:08:47.290547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:05.184 [2024-11-20 07:08:47.290612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.184 [2024-11-20 07:08:47.290775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.184 [2024-11-20 07:08:47.290820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:05.184 [2024-11-20 07:08:47.291149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:05.184 [2024-11-20 07:08:47.291394] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.184 [2024-11-20 07:08:47.291442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:05.184 [2024-11-20 07:08:47.291667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.184 pt3 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.184 "name": "raid_bdev1", 00:12:05.184 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:05.184 "strip_size_kb": 64, 00:12:05.184 "state": "online", 00:12:05.184 "raid_level": "raid0", 00:12:05.184 "superblock": true, 00:12:05.184 "num_base_bdevs": 3, 00:12:05.184 "num_base_bdevs_discovered": 3, 00:12:05.184 "num_base_bdevs_operational": 3, 00:12:05.184 "base_bdevs_list": [ 00:12:05.184 { 00:12:05.184 "name": "pt1", 00:12:05.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.184 "is_configured": true, 00:12:05.184 "data_offset": 2048, 00:12:05.184 "data_size": 63488 00:12:05.184 }, 00:12:05.184 { 00:12:05.184 "name": "pt2", 00:12:05.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.184 "is_configured": true, 00:12:05.184 "data_offset": 2048, 00:12:05.184 "data_size": 63488 00:12:05.184 }, 00:12:05.184 { 00:12:05.184 "name": "pt3", 00:12:05.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.184 "is_configured": true, 00:12:05.184 "data_offset": 2048, 00:12:05.184 "data_size": 63488 00:12:05.184 } 00:12:05.184 ] 00:12:05.184 }' 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.184 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.750 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.750 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:05.750 07:08:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.750 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.750 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.750 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.750 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.751 [2024-11-20 07:08:47.745019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.751 "name": "raid_bdev1", 00:12:05.751 "aliases": [ 00:12:05.751 "8e259ce6-6397-4725-925d-cf97a580e3c6" 00:12:05.751 ], 00:12:05.751 "product_name": "Raid Volume", 00:12:05.751 "block_size": 512, 00:12:05.751 "num_blocks": 190464, 00:12:05.751 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:05.751 "assigned_rate_limits": { 00:12:05.751 "rw_ios_per_sec": 0, 00:12:05.751 "rw_mbytes_per_sec": 0, 00:12:05.751 "r_mbytes_per_sec": 0, 00:12:05.751 "w_mbytes_per_sec": 0 00:12:05.751 }, 00:12:05.751 "claimed": false, 00:12:05.751 "zoned": false, 00:12:05.751 "supported_io_types": { 00:12:05.751 "read": true, 00:12:05.751 "write": true, 00:12:05.751 "unmap": true, 00:12:05.751 "flush": true, 00:12:05.751 "reset": true, 00:12:05.751 "nvme_admin": false, 00:12:05.751 "nvme_io": false, 00:12:05.751 "nvme_io_md": false, 00:12:05.751 
"write_zeroes": true, 00:12:05.751 "zcopy": false, 00:12:05.751 "get_zone_info": false, 00:12:05.751 "zone_management": false, 00:12:05.751 "zone_append": false, 00:12:05.751 "compare": false, 00:12:05.751 "compare_and_write": false, 00:12:05.751 "abort": false, 00:12:05.751 "seek_hole": false, 00:12:05.751 "seek_data": false, 00:12:05.751 "copy": false, 00:12:05.751 "nvme_iov_md": false 00:12:05.751 }, 00:12:05.751 "memory_domains": [ 00:12:05.751 { 00:12:05.751 "dma_device_id": "system", 00:12:05.751 "dma_device_type": 1 00:12:05.751 }, 00:12:05.751 { 00:12:05.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.751 "dma_device_type": 2 00:12:05.751 }, 00:12:05.751 { 00:12:05.751 "dma_device_id": "system", 00:12:05.751 "dma_device_type": 1 00:12:05.751 }, 00:12:05.751 { 00:12:05.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.751 "dma_device_type": 2 00:12:05.751 }, 00:12:05.751 { 00:12:05.751 "dma_device_id": "system", 00:12:05.751 "dma_device_type": 1 00:12:05.751 }, 00:12:05.751 { 00:12:05.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.751 "dma_device_type": 2 00:12:05.751 } 00:12:05.751 ], 00:12:05.751 "driver_specific": { 00:12:05.751 "raid": { 00:12:05.751 "uuid": "8e259ce6-6397-4725-925d-cf97a580e3c6", 00:12:05.751 "strip_size_kb": 64, 00:12:05.751 "state": "online", 00:12:05.751 "raid_level": "raid0", 00:12:05.751 "superblock": true, 00:12:05.751 "num_base_bdevs": 3, 00:12:05.751 "num_base_bdevs_discovered": 3, 00:12:05.751 "num_base_bdevs_operational": 3, 00:12:05.751 "base_bdevs_list": [ 00:12:05.751 { 00:12:05.751 "name": "pt1", 00:12:05.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.751 "is_configured": true, 00:12:05.751 "data_offset": 2048, 00:12:05.751 "data_size": 63488 00:12:05.751 }, 00:12:05.751 { 00:12:05.751 "name": "pt2", 00:12:05.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.751 "is_configured": true, 00:12:05.751 "data_offset": 2048, 00:12:05.751 "data_size": 63488 00:12:05.751 }, 00:12:05.751 
{ 00:12:05.751 "name": "pt3", 00:12:05.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.751 "is_configured": true, 00:12:05.751 "data_offset": 2048, 00:12:05.751 "data_size": 63488 00:12:05.751 } 00:12:05.751 ] 00:12:05.751 } 00:12:05.751 } 00:12:05.751 }' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:05.751 pt2 00:12:05.751 pt3' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.751 07:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.751 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.752 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.010 [2024-11-20 
07:08:48.024576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8e259ce6-6397-4725-925d-cf97a580e3c6 '!=' 8e259ce6-6397-4725-925d-cf97a580e3c6 ']' 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65362 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65362 ']' 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65362 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65362 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65362' 00:12:06.010 killing process with pid 65362 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65362 00:12:06.010 07:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65362 00:12:06.010 [2024-11-20 07:08:48.107100] bdev_raid.c:1387:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:12:06.010 [2024-11-20 07:08:48.107213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.010 [2024-11-20 07:08:48.107343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.010 [2024-11-20 07:08:48.107397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:06.268 [2024-11-20 07:08:48.421055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.641 07:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:07.641 00:12:07.641 real 0m5.384s 00:12:07.641 user 0m7.718s 00:12:07.641 sys 0m0.905s 00:12:07.641 07:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.641 07:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.641 ************************************ 00:12:07.641 END TEST raid_superblock_test 00:12:07.641 ************************************ 00:12:07.641 07:08:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:07.641 07:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:07.641 07:08:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.641 07:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.641 ************************************ 00:12:07.641 START TEST raid_read_error_test 00:12:07.641 ************************************ 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 
'!=' raid1 ']' 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dBerJlH7Tz 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65625 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65625 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65625 ']' 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.641 07:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.641 [2024-11-20 07:08:49.757795] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:07.642 [2024-11-20 07:08:49.757937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65625 ] 00:12:07.899 [2024-11-20 07:08:49.915758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.899 [2024-11-20 07:08:50.033950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.157 [2024-11-20 07:08:50.239809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.157 [2024-11-20 07:08:50.239875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.416 BaseBdev1_malloc 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.416 true 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.416 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.416 [2024-11-20 07:08:50.678831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:08.416 [2024-11-20 07:08:50.678960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.416 [2024-11-20 07:08:50.679008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:08.416 [2024-11-20 07:08:50.679023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.676 [2024-11-20 07:08:50.681522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.676 [2024-11-20 07:08:50.681560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.676 BaseBdev1 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.676 BaseBdev2_malloc 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.676 true 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.676 [2024-11-20 07:08:50.746112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:08.676 [2024-11-20 07:08:50.746230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.676 [2024-11-20 07:08:50.746281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:08.676 [2024-11-20 07:08:50.746318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.676 [2024-11-20 07:08:50.748764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.676 [2024-11-20 07:08:50.748849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.676 BaseBdev2 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.676 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.677 BaseBdev3_malloc 00:12:08.677 07:08:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.677 true 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.677 [2024-11-20 07:08:50.830719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:08.677 [2024-11-20 07:08:50.830846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.677 [2024-11-20 07:08:50.830876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:08.677 [2024-11-20 07:08:50.830888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.677 [2024-11-20 07:08:50.833315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.677 [2024-11-20 07:08:50.833368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:08.677 BaseBdev3 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.677 [2024-11-20 07:08:50.842807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.677 [2024-11-20 07:08:50.844791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.677 [2024-11-20 07:08:50.844874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.677 [2024-11-20 07:08:50.845082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:08.677 [2024-11-20 07:08:50.845095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:08.677 [2024-11-20 07:08:50.845455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:08.677 [2024-11-20 07:08:50.845679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:08.677 [2024-11-20 07:08:50.845703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:08.677 [2024-11-20 07:08:50.845902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.677 07:08:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.677 "name": "raid_bdev1", 00:12:08.677 "uuid": "4db43d97-a079-4bcd-b777-d6fc37abbee3", 00:12:08.677 "strip_size_kb": 64, 00:12:08.677 "state": "online", 00:12:08.677 "raid_level": "raid0", 00:12:08.677 "superblock": true, 00:12:08.677 "num_base_bdevs": 3, 00:12:08.677 "num_base_bdevs_discovered": 3, 00:12:08.677 "num_base_bdevs_operational": 3, 00:12:08.677 "base_bdevs_list": [ 00:12:08.677 { 00:12:08.677 "name": "BaseBdev1", 00:12:08.677 "uuid": "c564b260-a039-5376-b477-16f690b7ceaf", 00:12:08.677 "is_configured": true, 00:12:08.677 "data_offset": 2048, 00:12:08.677 "data_size": 63488 00:12:08.677 }, 00:12:08.677 { 00:12:08.677 "name": "BaseBdev2", 00:12:08.677 "uuid": "96bf3cae-9478-5825-8969-23bf7482317e", 00:12:08.677 "is_configured": true, 00:12:08.677 "data_offset": 2048, 00:12:08.677 "data_size": 63488 
00:12:08.677 }, 00:12:08.677 { 00:12:08.677 "name": "BaseBdev3", 00:12:08.677 "uuid": "b5208c6a-1ade-51b6-9711-e13c9dc6b963", 00:12:08.677 "is_configured": true, 00:12:08.677 "data_offset": 2048, 00:12:08.677 "data_size": 63488 00:12:08.677 } 00:12:08.677 ] 00:12:08.677 }' 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.677 07:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.245 07:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:09.245 07:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:09.245 [2024-11-20 07:08:51.375280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.182 "name": "raid_bdev1", 00:12:10.182 "uuid": "4db43d97-a079-4bcd-b777-d6fc37abbee3", 00:12:10.182 "strip_size_kb": 64, 00:12:10.182 "state": "online", 00:12:10.182 "raid_level": "raid0", 00:12:10.182 "superblock": true, 00:12:10.182 "num_base_bdevs": 3, 00:12:10.182 "num_base_bdevs_discovered": 3, 00:12:10.182 "num_base_bdevs_operational": 3, 00:12:10.182 "base_bdevs_list": [ 00:12:10.182 { 00:12:10.182 "name": "BaseBdev1", 00:12:10.182 "uuid": "c564b260-a039-5376-b477-16f690b7ceaf", 00:12:10.182 "is_configured": true, 00:12:10.182 "data_offset": 2048, 00:12:10.182 "data_size": 63488 
00:12:10.182 }, 00:12:10.182 { 00:12:10.182 "name": "BaseBdev2", 00:12:10.182 "uuid": "96bf3cae-9478-5825-8969-23bf7482317e", 00:12:10.182 "is_configured": true, 00:12:10.182 "data_offset": 2048, 00:12:10.182 "data_size": 63488 00:12:10.182 }, 00:12:10.182 { 00:12:10.182 "name": "BaseBdev3", 00:12:10.182 "uuid": "b5208c6a-1ade-51b6-9711-e13c9dc6b963", 00:12:10.182 "is_configured": true, 00:12:10.182 "data_offset": 2048, 00:12:10.182 "data_size": 63488 00:12:10.182 } 00:12:10.182 ] 00:12:10.182 }' 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.182 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.750 [2024-11-20 07:08:52.736005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.750 [2024-11-20 07:08:52.736039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.750 [2024-11-20 07:08:52.738913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.750 [2024-11-20 07:08:52.738959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.750 [2024-11-20 07:08:52.738997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.750 [2024-11-20 07:08:52.739007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:10.750 { 00:12:10.750 "results": [ 00:12:10.750 { 00:12:10.750 "job": "raid_bdev1", 00:12:10.750 "core_mask": "0x1", 00:12:10.750 "workload": "randrw", 00:12:10.750 "percentage": 50, 
00:12:10.750 "status": "finished", 00:12:10.750 "queue_depth": 1, 00:12:10.750 "io_size": 131072, 00:12:10.750 "runtime": 1.361335, 00:12:10.750 "iops": 14454.19386117304, 00:12:10.750 "mibps": 1806.77423264663, 00:12:10.750 "io_failed": 1, 00:12:10.750 "io_timeout": 0, 00:12:10.750 "avg_latency_us": 96.24437886656392, 00:12:10.750 "min_latency_us": 22.91703056768559, 00:12:10.750 "max_latency_us": 1760.0279475982534 00:12:10.750 } 00:12:10.750 ], 00:12:10.750 "core_count": 1 00:12:10.750 } 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65625 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65625 ']' 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65625 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65625 00:12:10.750 killing process with pid 65625 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65625' 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65625 00:12:10.750 [2024-11-20 07:08:52.770477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.750 07:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65625 00:12:11.009 [2024-11-20 
07:08:53.020101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dBerJlH7Tz 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:12.386 ************************************ 00:12:12.386 END TEST raid_read_error_test 00:12:12.386 ************************************ 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:12.386 00:12:12.386 real 0m4.603s 00:12:12.386 user 0m5.467s 00:12:12.386 sys 0m0.543s 00:12:12.386 07:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.387 07:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 07:08:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:12.387 07:08:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:12.387 07:08:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.387 07:08:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 ************************************ 00:12:12.387 START TEST raid_write_error_test 00:12:12.387 ************************************ 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:12:12.387 07:08:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:12.387 07:08:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qouO5DbbjP 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65766 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65766 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65766 ']' 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.387 07:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 [2024-11-20 07:08:54.423341] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:12.387 [2024-11-20 07:08:54.423899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65766 ] 00:12:12.387 [2024-11-20 07:08:54.579142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.646 [2024-11-20 07:08:54.703159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.905 [2024-11-20 07:08:54.915338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.905 [2024-11-20 07:08:54.915502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.180 BaseBdev1_malloc 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.180 true 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.180 [2024-11-20 07:08:55.370817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.180 [2024-11-20 07:08:55.370926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.180 [2024-11-20 07:08:55.370955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.180 [2024-11-20 07:08:55.370967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.180 [2024-11-20 07:08:55.373362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.180 [2024-11-20 07:08:55.373412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.180 BaseBdev1 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.180 BaseBdev2_malloc 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.180 true 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.180 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.180 [2024-11-20 07:08:55.438986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.180 [2024-11-20 07:08:55.439114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.180 [2024-11-20 07:08:55.439139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.180 [2024-11-20 07:08:55.439151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.440 [2024-11-20 07:08:55.441644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.440 [2024-11-20 07:08:55.441689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.440 BaseBdev2 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.440 07:08:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.440 BaseBdev3_malloc 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.440 true 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.440 [2024-11-20 07:08:55.518295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:13.440 [2024-11-20 07:08:55.518365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.440 [2024-11-20 07:08:55.518385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:13.440 [2024-11-20 07:08:55.518397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.440 [2024-11-20 07:08:55.520646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.440 [2024-11-20 07:08:55.520741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:13.440 BaseBdev3 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.440 [2024-11-20 07:08:55.530362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.440 [2024-11-20 07:08:55.532239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.440 [2024-11-20 07:08:55.532331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.440 [2024-11-20 07:08:55.532561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:13.440 [2024-11-20 07:08:55.532577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:13.440 [2024-11-20 07:08:55.532852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:13.440 [2024-11-20 07:08:55.533024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:13.440 [2024-11-20 07:08:55.533039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:13.440 [2024-11-20 07:08:55.533205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.440 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.441 "name": "raid_bdev1", 00:12:13.441 "uuid": "3a8f001d-2cb0-44bb-931e-2d7183271969", 00:12:13.441 "strip_size_kb": 64, 00:12:13.441 "state": "online", 00:12:13.441 "raid_level": "raid0", 00:12:13.441 "superblock": true, 00:12:13.441 "num_base_bdevs": 3, 00:12:13.441 "num_base_bdevs_discovered": 3, 00:12:13.441 "num_base_bdevs_operational": 3, 00:12:13.441 "base_bdevs_list": [ 00:12:13.441 { 00:12:13.441 "name": "BaseBdev1", 
00:12:13.441 "uuid": "694e9184-9e50-5494-ad3a-61c0ebb1e499", 00:12:13.441 "is_configured": true, 00:12:13.441 "data_offset": 2048, 00:12:13.441 "data_size": 63488 00:12:13.441 }, 00:12:13.441 { 00:12:13.441 "name": "BaseBdev2", 00:12:13.441 "uuid": "fbc870df-9e31-5785-981b-ad0906288ad7", 00:12:13.441 "is_configured": true, 00:12:13.441 "data_offset": 2048, 00:12:13.441 "data_size": 63488 00:12:13.441 }, 00:12:13.441 { 00:12:13.441 "name": "BaseBdev3", 00:12:13.441 "uuid": "5f08c16e-f78d-5cea-bfe4-d4aae2cb732c", 00:12:13.441 "is_configured": true, 00:12:13.441 "data_offset": 2048, 00:12:13.441 "data_size": 63488 00:12:13.441 } 00:12:13.441 ] 00:12:13.441 }' 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.441 07:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.008 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:14.008 07:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.008 [2024-11-20 07:08:56.095009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:14.948 07:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:14.948 07:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.948 "name": "raid_bdev1", 00:12:14.948 "uuid": "3a8f001d-2cb0-44bb-931e-2d7183271969", 00:12:14.948 "strip_size_kb": 64, 00:12:14.948 "state": "online", 00:12:14.948 
"raid_level": "raid0", 00:12:14.948 "superblock": true, 00:12:14.948 "num_base_bdevs": 3, 00:12:14.948 "num_base_bdevs_discovered": 3, 00:12:14.948 "num_base_bdevs_operational": 3, 00:12:14.948 "base_bdevs_list": [ 00:12:14.948 { 00:12:14.948 "name": "BaseBdev1", 00:12:14.948 "uuid": "694e9184-9e50-5494-ad3a-61c0ebb1e499", 00:12:14.948 "is_configured": true, 00:12:14.948 "data_offset": 2048, 00:12:14.948 "data_size": 63488 00:12:14.948 }, 00:12:14.948 { 00:12:14.948 "name": "BaseBdev2", 00:12:14.948 "uuid": "fbc870df-9e31-5785-981b-ad0906288ad7", 00:12:14.948 "is_configured": true, 00:12:14.948 "data_offset": 2048, 00:12:14.948 "data_size": 63488 00:12:14.948 }, 00:12:14.948 { 00:12:14.948 "name": "BaseBdev3", 00:12:14.948 "uuid": "5f08c16e-f78d-5cea-bfe4-d4aae2cb732c", 00:12:14.948 "is_configured": true, 00:12:14.948 "data_offset": 2048, 00:12:14.948 "data_size": 63488 00:12:14.948 } 00:12:14.948 ] 00:12:14.948 }' 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.948 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.207 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.207 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.207 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.207 [2024-11-20 07:08:57.467493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.207 [2024-11-20 07:08:57.467526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.466 [2024-11-20 07:08:57.470478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.466 [2024-11-20 07:08:57.470523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.466 [2024-11-20 07:08:57.470562] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.466 [2024-11-20 07:08:57.470571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:15.466 { 00:12:15.466 "results": [ 00:12:15.466 { 00:12:15.466 "job": "raid_bdev1", 00:12:15.466 "core_mask": "0x1", 00:12:15.466 "workload": "randrw", 00:12:15.466 "percentage": 50, 00:12:15.466 "status": "finished", 00:12:15.466 "queue_depth": 1, 00:12:15.466 "io_size": 131072, 00:12:15.466 "runtime": 1.373161, 00:12:15.466 "iops": 14589.694871905043, 00:12:15.466 "mibps": 1823.7118589881304, 00:12:15.466 "io_failed": 1, 00:12:15.466 "io_timeout": 0, 00:12:15.466 "avg_latency_us": 95.309255614901, 00:12:15.466 "min_latency_us": 21.240174672489083, 00:12:15.466 "max_latency_us": 1452.380786026201 00:12:15.466 } 00:12:15.466 ], 00:12:15.466 "core_count": 1 00:12:15.466 } 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65766 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65766 ']' 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65766 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65766 00:12:15.466 killing process with pid 65766 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.466 07:08:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65766' 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65766 00:12:15.466 [2024-11-20 07:08:57.516916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.466 07:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65766 00:12:15.726 [2024-11-20 07:08:57.768728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.107 07:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qouO5DbbjP 00:12:17.107 07:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:17.107 07:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:17.107 07:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:17.107 07:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:17.107 07:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.107 07:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:17.107 07:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:17.107 ************************************ 00:12:17.107 END TEST raid_write_error_test 00:12:17.107 ************************************ 00:12:17.107 00:12:17.107 real 0m4.690s 00:12:17.107 user 0m5.642s 00:12:17.107 sys 0m0.540s 00:12:17.107 07:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.107 07:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.107 07:08:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:17.107 07:08:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:12:17.107 07:08:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:17.107 07:08:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.107 07:08:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.107 ************************************ 00:12:17.107 START TEST raid_state_function_test 00:12:17.107 ************************************ 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:17.107 07:08:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65910 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65910' 00:12:17.107 Process raid pid: 65910 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65910 00:12:17.107 07:08:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65910 ']' 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.107 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.107 [2024-11-20 07:08:59.177442] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:17.107 [2024-11-20 07:08:59.177646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.107 [2024-11-20 07:08:59.353034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.367 [2024-11-20 07:08:59.479928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.626 [2024-11-20 07:08:59.703899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.626 [2024-11-20 07:08:59.703948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.886 [2024-11-20 07:09:00.051298] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.886 [2024-11-20 07:09:00.051441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.886 [2024-11-20 07:09:00.051478] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.886 [2024-11-20 07:09:00.051507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.886 [2024-11-20 07:09:00.051528] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.886 [2024-11-20 07:09:00.051552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.886 "name": "Existed_Raid", 00:12:17.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.886 "strip_size_kb": 64, 00:12:17.886 "state": "configuring", 00:12:17.886 "raid_level": "concat", 00:12:17.886 "superblock": false, 00:12:17.886 "num_base_bdevs": 3, 00:12:17.886 "num_base_bdevs_discovered": 0, 00:12:17.886 "num_base_bdevs_operational": 3, 00:12:17.886 "base_bdevs_list": [ 00:12:17.886 { 00:12:17.886 "name": "BaseBdev1", 00:12:17.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.886 "is_configured": false, 00:12:17.886 "data_offset": 0, 00:12:17.886 "data_size": 0 00:12:17.886 }, 00:12:17.886 { 00:12:17.886 "name": "BaseBdev2", 00:12:17.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.886 "is_configured": false, 00:12:17.886 "data_offset": 0, 00:12:17.886 "data_size": 0 00:12:17.886 }, 00:12:17.886 { 00:12:17.886 "name": "BaseBdev3", 00:12:17.886 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:17.886 "is_configured": false, 00:12:17.886 "data_offset": 0, 00:12:17.886 "data_size": 0 00:12:17.886 } 00:12:17.886 ] 00:12:17.886 }' 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.886 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.455 [2024-11-20 07:09:00.462562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.455 [2024-11-20 07:09:00.462605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.455 [2024-11-20 07:09:00.474526] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.455 [2024-11-20 07:09:00.474575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.455 [2024-11-20 07:09:00.474586] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.455 [2024-11-20 07:09:00.474597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:18.455 [2024-11-20 07:09:00.474604] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:18.455 [2024-11-20 07:09:00.474614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.455 [2024-11-20 07:09:00.525724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.455 BaseBdev1 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.455 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.455 [ 00:12:18.455 { 00:12:18.455 "name": "BaseBdev1", 00:12:18.455 "aliases": [ 00:12:18.455 "85b5a7f5-2175-42c0-8caf-0ae8e37f4ee8" 00:12:18.455 ], 00:12:18.455 "product_name": "Malloc disk", 00:12:18.455 "block_size": 512, 00:12:18.455 "num_blocks": 65536, 00:12:18.455 "uuid": "85b5a7f5-2175-42c0-8caf-0ae8e37f4ee8", 00:12:18.455 "assigned_rate_limits": { 00:12:18.455 "rw_ios_per_sec": 0, 00:12:18.455 "rw_mbytes_per_sec": 0, 00:12:18.455 "r_mbytes_per_sec": 0, 00:12:18.455 "w_mbytes_per_sec": 0 00:12:18.455 }, 00:12:18.455 "claimed": true, 00:12:18.455 "claim_type": "exclusive_write", 00:12:18.455 "zoned": false, 00:12:18.455 "supported_io_types": { 00:12:18.455 "read": true, 00:12:18.455 "write": true, 00:12:18.455 "unmap": true, 00:12:18.455 "flush": true, 00:12:18.455 "reset": true, 00:12:18.455 "nvme_admin": false, 00:12:18.455 "nvme_io": false, 00:12:18.455 "nvme_io_md": false, 00:12:18.455 "write_zeroes": true, 00:12:18.455 "zcopy": true, 00:12:18.455 "get_zone_info": false, 00:12:18.455 "zone_management": false, 00:12:18.455 "zone_append": false, 00:12:18.455 "compare": false, 00:12:18.455 "compare_and_write": false, 00:12:18.455 "abort": true, 00:12:18.455 "seek_hole": false, 00:12:18.455 "seek_data": false, 00:12:18.455 "copy": true, 00:12:18.456 "nvme_iov_md": false 00:12:18.456 }, 00:12:18.456 "memory_domains": [ 00:12:18.456 { 00:12:18.456 "dma_device_id": "system", 00:12:18.456 "dma_device_type": 1 00:12:18.456 }, 00:12:18.456 { 00:12:18.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:18.456 "dma_device_type": 2 00:12:18.456 } 00:12:18.456 ], 00:12:18.456 "driver_specific": {} 00:12:18.456 } 00:12:18.456 ] 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.456 07:09:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.456 "name": "Existed_Raid", 00:12:18.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.456 "strip_size_kb": 64, 00:12:18.456 "state": "configuring", 00:12:18.456 "raid_level": "concat", 00:12:18.456 "superblock": false, 00:12:18.456 "num_base_bdevs": 3, 00:12:18.456 "num_base_bdevs_discovered": 1, 00:12:18.456 "num_base_bdevs_operational": 3, 00:12:18.456 "base_bdevs_list": [ 00:12:18.456 { 00:12:18.456 "name": "BaseBdev1", 00:12:18.456 "uuid": "85b5a7f5-2175-42c0-8caf-0ae8e37f4ee8", 00:12:18.456 "is_configured": true, 00:12:18.456 "data_offset": 0, 00:12:18.456 "data_size": 65536 00:12:18.456 }, 00:12:18.456 { 00:12:18.456 "name": "BaseBdev2", 00:12:18.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.456 "is_configured": false, 00:12:18.456 "data_offset": 0, 00:12:18.456 "data_size": 0 00:12:18.456 }, 00:12:18.456 { 00:12:18.456 "name": "BaseBdev3", 00:12:18.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.456 "is_configured": false, 00:12:18.456 "data_offset": 0, 00:12:18.456 "data_size": 0 00:12:18.456 } 00:12:18.456 ] 00:12:18.456 }' 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.456 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.025 [2024-11-20 07:09:01.033166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.025 [2024-11-20 07:09:01.033318] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.025 [2024-11-20 07:09:01.045189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.025 [2024-11-20 07:09:01.047355] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.025 [2024-11-20 07:09:01.047399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.025 [2024-11-20 07:09:01.047412] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:19.025 [2024-11-20 07:09:01.047422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.025 07:09:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.025 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.026 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.026 "name": "Existed_Raid", 00:12:19.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.026 "strip_size_kb": 64, 00:12:19.026 "state": "configuring", 00:12:19.026 "raid_level": "concat", 00:12:19.026 "superblock": false, 00:12:19.026 "num_base_bdevs": 3, 00:12:19.026 "num_base_bdevs_discovered": 1, 00:12:19.026 "num_base_bdevs_operational": 3, 00:12:19.026 "base_bdevs_list": [ 00:12:19.026 { 00:12:19.026 "name": "BaseBdev1", 00:12:19.026 "uuid": "85b5a7f5-2175-42c0-8caf-0ae8e37f4ee8", 00:12:19.026 "is_configured": true, 00:12:19.026 "data_offset": 
0, 00:12:19.026 "data_size": 65536 00:12:19.026 }, 00:12:19.026 { 00:12:19.026 "name": "BaseBdev2", 00:12:19.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.026 "is_configured": false, 00:12:19.026 "data_offset": 0, 00:12:19.026 "data_size": 0 00:12:19.026 }, 00:12:19.026 { 00:12:19.026 "name": "BaseBdev3", 00:12:19.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.026 "is_configured": false, 00:12:19.026 "data_offset": 0, 00:12:19.026 "data_size": 0 00:12:19.026 } 00:12:19.026 ] 00:12:19.026 }' 00:12:19.026 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.026 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.285 [2024-11-20 07:09:01.507419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.285 BaseBdev2 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:19.285 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.286 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.286 [ 00:12:19.286 { 00:12:19.286 "name": "BaseBdev2", 00:12:19.286 "aliases": [ 00:12:19.286 "1aa06a97-d98d-4346-b432-279671b3ea4a" 00:12:19.286 ], 00:12:19.286 "product_name": "Malloc disk", 00:12:19.286 "block_size": 512, 00:12:19.286 "num_blocks": 65536, 00:12:19.286 "uuid": "1aa06a97-d98d-4346-b432-279671b3ea4a", 00:12:19.286 "assigned_rate_limits": { 00:12:19.286 "rw_ios_per_sec": 0, 00:12:19.286 "rw_mbytes_per_sec": 0, 00:12:19.286 "r_mbytes_per_sec": 0, 00:12:19.286 "w_mbytes_per_sec": 0 00:12:19.286 }, 00:12:19.286 "claimed": true, 00:12:19.286 "claim_type": "exclusive_write", 00:12:19.286 "zoned": false, 00:12:19.286 "supported_io_types": { 00:12:19.286 "read": true, 00:12:19.286 "write": true, 00:12:19.286 "unmap": true, 00:12:19.286 "flush": true, 00:12:19.286 "reset": true, 00:12:19.286 "nvme_admin": false, 00:12:19.286 "nvme_io": false, 00:12:19.286 "nvme_io_md": false, 00:12:19.286 "write_zeroes": true, 00:12:19.286 "zcopy": true, 00:12:19.286 "get_zone_info": false, 00:12:19.286 "zone_management": false, 00:12:19.286 "zone_append": false, 00:12:19.286 "compare": false, 00:12:19.286 "compare_and_write": false, 00:12:19.286 "abort": true, 00:12:19.286 "seek_hole": 
false, 00:12:19.286 "seek_data": false, 00:12:19.286 "copy": true, 00:12:19.286 "nvme_iov_md": false 00:12:19.286 }, 00:12:19.286 "memory_domains": [ 00:12:19.286 { 00:12:19.286 "dma_device_id": "system", 00:12:19.286 "dma_device_type": 1 00:12:19.286 }, 00:12:19.286 { 00:12:19.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.286 "dma_device_type": 2 00:12:19.286 } 00:12:19.286 ], 00:12:19.545 "driver_specific": {} 00:12:19.545 } 00:12:19.545 ] 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.545 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.546 "name": "Existed_Raid", 00:12:19.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.546 "strip_size_kb": 64, 00:12:19.546 "state": "configuring", 00:12:19.546 "raid_level": "concat", 00:12:19.546 "superblock": false, 00:12:19.546 "num_base_bdevs": 3, 00:12:19.546 "num_base_bdevs_discovered": 2, 00:12:19.546 "num_base_bdevs_operational": 3, 00:12:19.546 "base_bdevs_list": [ 00:12:19.546 { 00:12:19.546 "name": "BaseBdev1", 00:12:19.546 "uuid": "85b5a7f5-2175-42c0-8caf-0ae8e37f4ee8", 00:12:19.546 "is_configured": true, 00:12:19.546 "data_offset": 0, 00:12:19.546 "data_size": 65536 00:12:19.546 }, 00:12:19.546 { 00:12:19.546 "name": "BaseBdev2", 00:12:19.546 "uuid": "1aa06a97-d98d-4346-b432-279671b3ea4a", 00:12:19.546 "is_configured": true, 00:12:19.546 "data_offset": 0, 00:12:19.546 "data_size": 65536 00:12:19.546 }, 00:12:19.546 { 00:12:19.546 "name": "BaseBdev3", 00:12:19.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.546 "is_configured": false, 00:12:19.546 "data_offset": 0, 00:12:19.546 "data_size": 0 00:12:19.546 } 00:12:19.546 ] 00:12:19.546 }' 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.546 07:09:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.805 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:19.805 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.805 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 [2024-11-20 07:09:02.059820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.805 [2024-11-20 07:09:02.059871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:19.805 [2024-11-20 07:09:02.059884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:19.805 [2024-11-20 07:09:02.060158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:19.805 [2024-11-20 07:09:02.060325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:19.806 [2024-11-20 07:09:02.060336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:19.806 [2024-11-20 07:09:02.060663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.806 BaseBdev3 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.806 07:09:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.806 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.064 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.064 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.064 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.064 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.064 [ 00:12:20.064 { 00:12:20.064 "name": "BaseBdev3", 00:12:20.064 "aliases": [ 00:12:20.064 "920f48c5-8ccb-40fc-ac31-0ab93c388a85" 00:12:20.064 ], 00:12:20.064 "product_name": "Malloc disk", 00:12:20.064 "block_size": 512, 00:12:20.064 "num_blocks": 65536, 00:12:20.064 "uuid": "920f48c5-8ccb-40fc-ac31-0ab93c388a85", 00:12:20.064 "assigned_rate_limits": { 00:12:20.064 "rw_ios_per_sec": 0, 00:12:20.064 "rw_mbytes_per_sec": 0, 00:12:20.064 "r_mbytes_per_sec": 0, 00:12:20.064 "w_mbytes_per_sec": 0 00:12:20.064 }, 00:12:20.064 "claimed": true, 00:12:20.064 "claim_type": "exclusive_write", 00:12:20.064 "zoned": false, 00:12:20.064 "supported_io_types": { 00:12:20.064 "read": true, 00:12:20.064 "write": true, 00:12:20.064 "unmap": true, 00:12:20.064 "flush": true, 00:12:20.064 "reset": true, 00:12:20.064 "nvme_admin": false, 00:12:20.064 "nvme_io": false, 00:12:20.064 "nvme_io_md": false, 00:12:20.064 "write_zeroes": true, 00:12:20.064 "zcopy": true, 00:12:20.064 "get_zone_info": false, 00:12:20.064 "zone_management": false, 00:12:20.064 "zone_append": false, 00:12:20.064 "compare": false, 
00:12:20.064 "compare_and_write": false, 00:12:20.064 "abort": true, 00:12:20.064 "seek_hole": false, 00:12:20.064 "seek_data": false, 00:12:20.064 "copy": true, 00:12:20.064 "nvme_iov_md": false 00:12:20.064 }, 00:12:20.064 "memory_domains": [ 00:12:20.064 { 00:12:20.064 "dma_device_id": "system", 00:12:20.064 "dma_device_type": 1 00:12:20.064 }, 00:12:20.064 { 00:12:20.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.064 "dma_device_type": 2 00:12:20.064 } 00:12:20.064 ], 00:12:20.064 "driver_specific": {} 00:12:20.064 } 00:12:20.064 ] 00:12:20.064 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.065 "name": "Existed_Raid", 00:12:20.065 "uuid": "0d360b7b-51f3-4991-ab2e-8da7be485474", 00:12:20.065 "strip_size_kb": 64, 00:12:20.065 "state": "online", 00:12:20.065 "raid_level": "concat", 00:12:20.065 "superblock": false, 00:12:20.065 "num_base_bdevs": 3, 00:12:20.065 "num_base_bdevs_discovered": 3, 00:12:20.065 "num_base_bdevs_operational": 3, 00:12:20.065 "base_bdevs_list": [ 00:12:20.065 { 00:12:20.065 "name": "BaseBdev1", 00:12:20.065 "uuid": "85b5a7f5-2175-42c0-8caf-0ae8e37f4ee8", 00:12:20.065 "is_configured": true, 00:12:20.065 "data_offset": 0, 00:12:20.065 "data_size": 65536 00:12:20.065 }, 00:12:20.065 { 00:12:20.065 "name": "BaseBdev2", 00:12:20.065 "uuid": "1aa06a97-d98d-4346-b432-279671b3ea4a", 00:12:20.065 "is_configured": true, 00:12:20.065 "data_offset": 0, 00:12:20.065 "data_size": 65536 00:12:20.065 }, 00:12:20.065 { 00:12:20.065 "name": "BaseBdev3", 00:12:20.065 "uuid": "920f48c5-8ccb-40fc-ac31-0ab93c388a85", 00:12:20.065 "is_configured": true, 00:12:20.065 "data_offset": 0, 00:12:20.065 "data_size": 65536 00:12:20.065 } 00:12:20.065 ] 00:12:20.065 }' 00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:20.065 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.631 [2024-11-20 07:09:02.599369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.631 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:20.631 "name": "Existed_Raid", 00:12:20.631 "aliases": [ 00:12:20.631 "0d360b7b-51f3-4991-ab2e-8da7be485474" 00:12:20.631 ], 00:12:20.631 "product_name": "Raid Volume", 00:12:20.631 "block_size": 512, 00:12:20.631 "num_blocks": 196608, 00:12:20.631 "uuid": "0d360b7b-51f3-4991-ab2e-8da7be485474", 00:12:20.631 "assigned_rate_limits": { 00:12:20.631 "rw_ios_per_sec": 0, 00:12:20.631 "rw_mbytes_per_sec": 0, 00:12:20.631 "r_mbytes_per_sec": 
0, 00:12:20.631 "w_mbytes_per_sec": 0 00:12:20.631 }, 00:12:20.631 "claimed": false, 00:12:20.631 "zoned": false, 00:12:20.631 "supported_io_types": { 00:12:20.631 "read": true, 00:12:20.631 "write": true, 00:12:20.631 "unmap": true, 00:12:20.631 "flush": true, 00:12:20.631 "reset": true, 00:12:20.631 "nvme_admin": false, 00:12:20.631 "nvme_io": false, 00:12:20.631 "nvme_io_md": false, 00:12:20.631 "write_zeroes": true, 00:12:20.631 "zcopy": false, 00:12:20.631 "get_zone_info": false, 00:12:20.631 "zone_management": false, 00:12:20.631 "zone_append": false, 00:12:20.631 "compare": false, 00:12:20.631 "compare_and_write": false, 00:12:20.631 "abort": false, 00:12:20.631 "seek_hole": false, 00:12:20.631 "seek_data": false, 00:12:20.631 "copy": false, 00:12:20.631 "nvme_iov_md": false 00:12:20.631 }, 00:12:20.631 "memory_domains": [ 00:12:20.631 { 00:12:20.631 "dma_device_id": "system", 00:12:20.631 "dma_device_type": 1 00:12:20.631 }, 00:12:20.631 { 00:12:20.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.631 "dma_device_type": 2 00:12:20.631 }, 00:12:20.631 { 00:12:20.631 "dma_device_id": "system", 00:12:20.631 "dma_device_type": 1 00:12:20.631 }, 00:12:20.631 { 00:12:20.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.631 "dma_device_type": 2 00:12:20.631 }, 00:12:20.631 { 00:12:20.631 "dma_device_id": "system", 00:12:20.631 "dma_device_type": 1 00:12:20.631 }, 00:12:20.631 { 00:12:20.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.632 "dma_device_type": 2 00:12:20.632 } 00:12:20.632 ], 00:12:20.632 "driver_specific": { 00:12:20.632 "raid": { 00:12:20.632 "uuid": "0d360b7b-51f3-4991-ab2e-8da7be485474", 00:12:20.632 "strip_size_kb": 64, 00:12:20.632 "state": "online", 00:12:20.632 "raid_level": "concat", 00:12:20.632 "superblock": false, 00:12:20.632 "num_base_bdevs": 3, 00:12:20.632 "num_base_bdevs_discovered": 3, 00:12:20.632 "num_base_bdevs_operational": 3, 00:12:20.632 "base_bdevs_list": [ 00:12:20.632 { 00:12:20.632 "name": "BaseBdev1", 
00:12:20.632 "uuid": "85b5a7f5-2175-42c0-8caf-0ae8e37f4ee8", 00:12:20.632 "is_configured": true, 00:12:20.632 "data_offset": 0, 00:12:20.632 "data_size": 65536 00:12:20.632 }, 00:12:20.632 { 00:12:20.632 "name": "BaseBdev2", 00:12:20.632 "uuid": "1aa06a97-d98d-4346-b432-279671b3ea4a", 00:12:20.632 "is_configured": true, 00:12:20.632 "data_offset": 0, 00:12:20.632 "data_size": 65536 00:12:20.632 }, 00:12:20.632 { 00:12:20.632 "name": "BaseBdev3", 00:12:20.632 "uuid": "920f48c5-8ccb-40fc-ac31-0ab93c388a85", 00:12:20.632 "is_configured": true, 00:12:20.632 "data_offset": 0, 00:12:20.632 "data_size": 65536 00:12:20.632 } 00:12:20.632 ] 00:12:20.632 } 00:12:20.632 } 00:12:20.632 }' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:20.632 BaseBdev2 00:12:20.632 BaseBdev3' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.632 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.632 [2024-11-20 07:09:02.862682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.632 [2024-11-20 07:09:02.862773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.632 [2024-11-20 07:09:02.862868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.890 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.890 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.890 "name": "Existed_Raid", 00:12:20.890 "uuid": "0d360b7b-51f3-4991-ab2e-8da7be485474", 00:12:20.890 "strip_size_kb": 64, 00:12:20.890 "state": "offline", 00:12:20.890 "raid_level": "concat", 00:12:20.890 "superblock": false, 00:12:20.890 "num_base_bdevs": 3, 00:12:20.890 "num_base_bdevs_discovered": 2, 00:12:20.890 "num_base_bdevs_operational": 2, 00:12:20.890 "base_bdevs_list": [ 00:12:20.890 { 00:12:20.890 "name": null, 00:12:20.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.890 "is_configured": false, 00:12:20.890 "data_offset": 0, 00:12:20.890 "data_size": 65536 00:12:20.890 }, 00:12:20.890 { 00:12:20.890 "name": "BaseBdev2", 00:12:20.890 "uuid": 
"1aa06a97-d98d-4346-b432-279671b3ea4a", 00:12:20.890 "is_configured": true, 00:12:20.890 "data_offset": 0, 00:12:20.890 "data_size": 65536 00:12:20.890 }, 00:12:20.890 { 00:12:20.890 "name": "BaseBdev3", 00:12:20.890 "uuid": "920f48c5-8ccb-40fc-ac31-0ab93c388a85", 00:12:20.890 "is_configured": true, 00:12:20.890 "data_offset": 0, 00:12:20.890 "data_size": 65536 00:12:20.890 } 00:12:20.890 ] 00:12:20.890 }' 00:12:20.890 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.890 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.149 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:21.149 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.149 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.149 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.149 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.149 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 [2024-11-20 07:09:03.452267] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.407 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 [2024-11-20 07:09:03.623739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:21.407 [2024-11-20 07:09:03.623804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:21.665 07:09:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.665 BaseBdev2 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.665 
07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.665 [ 00:12:21.665 { 00:12:21.665 "name": "BaseBdev2", 00:12:21.665 "aliases": [ 00:12:21.665 "d8a97779-bb40-4f49-86ac-f642981dfc82" 00:12:21.665 ], 00:12:21.665 "product_name": "Malloc disk", 00:12:21.665 "block_size": 512, 00:12:21.665 "num_blocks": 65536, 00:12:21.665 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:21.665 "assigned_rate_limits": { 00:12:21.665 "rw_ios_per_sec": 0, 00:12:21.665 "rw_mbytes_per_sec": 0, 00:12:21.665 "r_mbytes_per_sec": 0, 00:12:21.665 "w_mbytes_per_sec": 0 00:12:21.665 }, 00:12:21.665 "claimed": false, 00:12:21.665 "zoned": false, 00:12:21.665 "supported_io_types": { 00:12:21.665 "read": true, 00:12:21.665 "write": true, 00:12:21.665 "unmap": true, 00:12:21.665 "flush": true, 00:12:21.665 "reset": true, 00:12:21.665 "nvme_admin": false, 00:12:21.665 "nvme_io": false, 00:12:21.665 "nvme_io_md": false, 00:12:21.665 "write_zeroes": true, 
00:12:21.665 "zcopy": true, 00:12:21.665 "get_zone_info": false, 00:12:21.665 "zone_management": false, 00:12:21.665 "zone_append": false, 00:12:21.665 "compare": false, 00:12:21.665 "compare_and_write": false, 00:12:21.665 "abort": true, 00:12:21.665 "seek_hole": false, 00:12:21.665 "seek_data": false, 00:12:21.665 "copy": true, 00:12:21.665 "nvme_iov_md": false 00:12:21.665 }, 00:12:21.665 "memory_domains": [ 00:12:21.665 { 00:12:21.665 "dma_device_id": "system", 00:12:21.665 "dma_device_type": 1 00:12:21.665 }, 00:12:21.665 { 00:12:21.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.665 "dma_device_type": 2 00:12:21.665 } 00:12:21.665 ], 00:12:21.665 "driver_specific": {} 00:12:21.665 } 00:12:21.665 ] 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.665 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.924 BaseBdev3 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.924 07:09:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.924 [ 00:12:21.924 { 00:12:21.924 "name": "BaseBdev3", 00:12:21.924 "aliases": [ 00:12:21.924 "9f41922e-c30e-4078-bbd7-c0b86a3dc36c" 00:12:21.924 ], 00:12:21.924 "product_name": "Malloc disk", 00:12:21.924 "block_size": 512, 00:12:21.924 "num_blocks": 65536, 00:12:21.924 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:21.924 "assigned_rate_limits": { 00:12:21.924 "rw_ios_per_sec": 0, 00:12:21.924 "rw_mbytes_per_sec": 0, 00:12:21.924 "r_mbytes_per_sec": 0, 00:12:21.924 "w_mbytes_per_sec": 0 00:12:21.924 }, 00:12:21.924 "claimed": false, 00:12:21.924 "zoned": false, 00:12:21.924 "supported_io_types": { 00:12:21.924 "read": true, 00:12:21.924 "write": true, 00:12:21.924 "unmap": true, 00:12:21.924 "flush": true, 00:12:21.924 "reset": true, 00:12:21.924 "nvme_admin": false, 00:12:21.924 "nvme_io": false, 00:12:21.924 "nvme_io_md": false, 00:12:21.924 "write_zeroes": true, 
00:12:21.924 "zcopy": true, 00:12:21.924 "get_zone_info": false, 00:12:21.924 "zone_management": false, 00:12:21.924 "zone_append": false, 00:12:21.924 "compare": false, 00:12:21.924 "compare_and_write": false, 00:12:21.924 "abort": true, 00:12:21.924 "seek_hole": false, 00:12:21.924 "seek_data": false, 00:12:21.924 "copy": true, 00:12:21.924 "nvme_iov_md": false 00:12:21.924 }, 00:12:21.924 "memory_domains": [ 00:12:21.924 { 00:12:21.924 "dma_device_id": "system", 00:12:21.924 "dma_device_type": 1 00:12:21.924 }, 00:12:21.924 { 00:12:21.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.924 "dma_device_type": 2 00:12:21.924 } 00:12:21.924 ], 00:12:21.924 "driver_specific": {} 00:12:21.924 } 00:12:21.924 ] 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.924 [2024-11-20 07:09:03.971928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.924 [2024-11-20 07:09:03.972043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.924 [2024-11-20 07:09:03.972104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.924 [2024-11-20 07:09:03.974300] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.924 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.924 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.924 07:09:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.924 "name": "Existed_Raid", 00:12:21.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.924 "strip_size_kb": 64, 00:12:21.924 "state": "configuring", 00:12:21.924 "raid_level": "concat", 00:12:21.924 "superblock": false, 00:12:21.924 "num_base_bdevs": 3, 00:12:21.924 "num_base_bdevs_discovered": 2, 00:12:21.924 "num_base_bdevs_operational": 3, 00:12:21.924 "base_bdevs_list": [ 00:12:21.924 { 00:12:21.924 "name": "BaseBdev1", 00:12:21.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.924 "is_configured": false, 00:12:21.924 "data_offset": 0, 00:12:21.924 "data_size": 0 00:12:21.924 }, 00:12:21.924 { 00:12:21.924 "name": "BaseBdev2", 00:12:21.924 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:21.924 "is_configured": true, 00:12:21.924 "data_offset": 0, 00:12:21.924 "data_size": 65536 00:12:21.924 }, 00:12:21.924 { 00:12:21.924 "name": "BaseBdev3", 00:12:21.924 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:21.924 "is_configured": true, 00:12:21.924 "data_offset": 0, 00:12:21.924 "data_size": 65536 00:12:21.924 } 00:12:21.924 ] 00:12:21.924 }' 00:12:21.924 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.924 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.491 [2024-11-20 07:09:04.471091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.491 "name": "Existed_Raid", 00:12:22.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.491 "strip_size_kb": 64, 00:12:22.491 "state": "configuring", 00:12:22.491 "raid_level": "concat", 00:12:22.491 "superblock": false, 
00:12:22.491 "num_base_bdevs": 3, 00:12:22.491 "num_base_bdevs_discovered": 1, 00:12:22.491 "num_base_bdevs_operational": 3, 00:12:22.491 "base_bdevs_list": [ 00:12:22.491 { 00:12:22.491 "name": "BaseBdev1", 00:12:22.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.491 "is_configured": false, 00:12:22.491 "data_offset": 0, 00:12:22.491 "data_size": 0 00:12:22.491 }, 00:12:22.491 { 00:12:22.491 "name": null, 00:12:22.491 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:22.491 "is_configured": false, 00:12:22.491 "data_offset": 0, 00:12:22.491 "data_size": 65536 00:12:22.491 }, 00:12:22.491 { 00:12:22.491 "name": "BaseBdev3", 00:12:22.491 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:22.491 "is_configured": true, 00:12:22.491 "data_offset": 0, 00:12:22.491 "data_size": 65536 00:12:22.491 } 00:12:22.491 ] 00:12:22.491 }' 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.491 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.749 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.749 07:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.749 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.749 07:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.749 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.007 
07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.007 [2024-11-20 07:09:05.054254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.007 BaseBdev1 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.007 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.007 [ 00:12:23.007 { 00:12:23.007 "name": "BaseBdev1", 00:12:23.007 "aliases": [ 00:12:23.007 "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc" 00:12:23.007 ], 00:12:23.007 "product_name": 
"Malloc disk", 00:12:23.007 "block_size": 512, 00:12:23.007 "num_blocks": 65536, 00:12:23.007 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:23.007 "assigned_rate_limits": { 00:12:23.007 "rw_ios_per_sec": 0, 00:12:23.007 "rw_mbytes_per_sec": 0, 00:12:23.007 "r_mbytes_per_sec": 0, 00:12:23.007 "w_mbytes_per_sec": 0 00:12:23.007 }, 00:12:23.007 "claimed": true, 00:12:23.007 "claim_type": "exclusive_write", 00:12:23.007 "zoned": false, 00:12:23.007 "supported_io_types": { 00:12:23.007 "read": true, 00:12:23.007 "write": true, 00:12:23.007 "unmap": true, 00:12:23.007 "flush": true, 00:12:23.007 "reset": true, 00:12:23.007 "nvme_admin": false, 00:12:23.007 "nvme_io": false, 00:12:23.007 "nvme_io_md": false, 00:12:23.007 "write_zeroes": true, 00:12:23.007 "zcopy": true, 00:12:23.008 "get_zone_info": false, 00:12:23.008 "zone_management": false, 00:12:23.008 "zone_append": false, 00:12:23.008 "compare": false, 00:12:23.008 "compare_and_write": false, 00:12:23.008 "abort": true, 00:12:23.008 "seek_hole": false, 00:12:23.008 "seek_data": false, 00:12:23.008 "copy": true, 00:12:23.008 "nvme_iov_md": false 00:12:23.008 }, 00:12:23.008 "memory_domains": [ 00:12:23.008 { 00:12:23.008 "dma_device_id": "system", 00:12:23.008 "dma_device_type": 1 00:12:23.008 }, 00:12:23.008 { 00:12:23.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.008 "dma_device_type": 2 00:12:23.008 } 00:12:23.008 ], 00:12:23.008 "driver_specific": {} 00:12:23.008 } 00:12:23.008 ] 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.008 07:09:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.008 "name": "Existed_Raid", 00:12:23.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.008 "strip_size_kb": 64, 00:12:23.008 "state": "configuring", 00:12:23.008 "raid_level": "concat", 00:12:23.008 "superblock": false, 00:12:23.008 "num_base_bdevs": 3, 00:12:23.008 "num_base_bdevs_discovered": 2, 00:12:23.008 "num_base_bdevs_operational": 3, 00:12:23.008 "base_bdevs_list": [ 00:12:23.008 { 00:12:23.008 "name": "BaseBdev1", 
00:12:23.008 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:23.008 "is_configured": true, 00:12:23.008 "data_offset": 0, 00:12:23.008 "data_size": 65536 00:12:23.008 }, 00:12:23.008 { 00:12:23.008 "name": null, 00:12:23.008 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:23.008 "is_configured": false, 00:12:23.008 "data_offset": 0, 00:12:23.008 "data_size": 65536 00:12:23.008 }, 00:12:23.008 { 00:12:23.008 "name": "BaseBdev3", 00:12:23.008 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:23.008 "is_configured": true, 00:12:23.008 "data_offset": 0, 00:12:23.008 "data_size": 65536 00:12:23.008 } 00:12:23.008 ] 00:12:23.008 }' 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.008 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.606 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.606 [2024-11-20 07:09:05.597436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:23.606 
07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.607 "name": "Existed_Raid", 00:12:23.607 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:23.607 "strip_size_kb": 64, 00:12:23.607 "state": "configuring", 00:12:23.607 "raid_level": "concat", 00:12:23.607 "superblock": false, 00:12:23.607 "num_base_bdevs": 3, 00:12:23.607 "num_base_bdevs_discovered": 1, 00:12:23.607 "num_base_bdevs_operational": 3, 00:12:23.607 "base_bdevs_list": [ 00:12:23.607 { 00:12:23.607 "name": "BaseBdev1", 00:12:23.607 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:23.607 "is_configured": true, 00:12:23.607 "data_offset": 0, 00:12:23.607 "data_size": 65536 00:12:23.607 }, 00:12:23.607 { 00:12:23.607 "name": null, 00:12:23.607 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:23.607 "is_configured": false, 00:12:23.607 "data_offset": 0, 00:12:23.607 "data_size": 65536 00:12:23.607 }, 00:12:23.607 { 00:12:23.607 "name": null, 00:12:23.607 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:23.607 "is_configured": false, 00:12:23.607 "data_offset": 0, 00:12:23.607 "data_size": 65536 00:12:23.607 } 00:12:23.607 ] 00:12:23.607 }' 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.607 07:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 [2024-11-20 07:09:06.088714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.864 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.122 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.122 "name": "Existed_Raid", 00:12:24.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.122 "strip_size_kb": 64, 00:12:24.122 "state": "configuring", 00:12:24.122 "raid_level": "concat", 00:12:24.122 "superblock": false, 00:12:24.122 "num_base_bdevs": 3, 00:12:24.122 "num_base_bdevs_discovered": 2, 00:12:24.122 "num_base_bdevs_operational": 3, 00:12:24.122 "base_bdevs_list": [ 00:12:24.122 { 00:12:24.122 "name": "BaseBdev1", 00:12:24.122 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:24.122 "is_configured": true, 00:12:24.122 "data_offset": 0, 00:12:24.122 "data_size": 65536 00:12:24.122 }, 00:12:24.122 { 00:12:24.122 "name": null, 00:12:24.122 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:24.122 "is_configured": false, 00:12:24.122 "data_offset": 0, 00:12:24.122 "data_size": 65536 00:12:24.122 }, 00:12:24.122 { 00:12:24.122 "name": "BaseBdev3", 00:12:24.122 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:24.122 "is_configured": true, 00:12:24.122 "data_offset": 0, 00:12:24.122 "data_size": 65536 00:12:24.122 } 00:12:24.122 ] 00:12:24.122 }' 00:12:24.122 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.122 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.380 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.380 [2024-11-20 07:09:06.615881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.638 07:09:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.638 "name": "Existed_Raid", 00:12:24.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.638 "strip_size_kb": 64, 00:12:24.638 "state": "configuring", 00:12:24.638 "raid_level": "concat", 00:12:24.638 "superblock": false, 00:12:24.638 "num_base_bdevs": 3, 00:12:24.638 "num_base_bdevs_discovered": 1, 00:12:24.638 "num_base_bdevs_operational": 3, 00:12:24.638 "base_bdevs_list": [ 00:12:24.638 { 00:12:24.638 "name": null, 00:12:24.638 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:24.638 "is_configured": false, 00:12:24.638 "data_offset": 0, 00:12:24.638 "data_size": 65536 00:12:24.638 }, 00:12:24.638 { 00:12:24.638 "name": null, 00:12:24.638 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:24.638 "is_configured": false, 00:12:24.638 "data_offset": 0, 00:12:24.638 "data_size": 65536 00:12:24.638 }, 00:12:24.638 { 00:12:24.638 "name": "BaseBdev3", 00:12:24.638 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:24.638 "is_configured": true, 00:12:24.638 "data_offset": 0, 00:12:24.638 "data_size": 65536 00:12:24.638 } 00:12:24.638 ] 00:12:24.638 }' 00:12:24.638 07:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.638 07:09:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.204 [2024-11-20 07:09:07.247147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.204 07:09:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.204 "name": "Existed_Raid", 00:12:25.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.204 "strip_size_kb": 64, 00:12:25.204 "state": "configuring", 00:12:25.204 "raid_level": "concat", 00:12:25.204 "superblock": false, 00:12:25.204 "num_base_bdevs": 3, 00:12:25.204 "num_base_bdevs_discovered": 2, 00:12:25.204 "num_base_bdevs_operational": 3, 00:12:25.204 "base_bdevs_list": [ 00:12:25.204 { 00:12:25.204 "name": null, 00:12:25.204 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:25.204 "is_configured": false, 00:12:25.204 "data_offset": 0, 00:12:25.204 "data_size": 65536 00:12:25.204 }, 00:12:25.204 { 00:12:25.204 "name": "BaseBdev2", 00:12:25.204 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:25.204 "is_configured": true, 00:12:25.204 "data_offset": 
0, 00:12:25.204 "data_size": 65536 00:12:25.204 }, 00:12:25.204 { 00:12:25.204 "name": "BaseBdev3", 00:12:25.204 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:25.204 "is_configured": true, 00:12:25.204 "data_offset": 0, 00:12:25.204 "data_size": 65536 00:12:25.204 } 00:12:25.204 ] 00:12:25.204 }' 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.204 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:25.462 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.720 [2024-11-20 07:09:07.785395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:25.720 [2024-11-20 07:09:07.785447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:25.720 [2024-11-20 07:09:07.785457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:25.720 [2024-11-20 07:09:07.785736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:25.720 [2024-11-20 07:09:07.785913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:25.720 [2024-11-20 07:09:07.785924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:25.720 [2024-11-20 07:09:07.786211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.720 NewBaseBdev 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.720 
07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.720 [ 00:12:25.720 { 00:12:25.720 "name": "NewBaseBdev", 00:12:25.720 "aliases": [ 00:12:25.720 "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc" 00:12:25.720 ], 00:12:25.720 "product_name": "Malloc disk", 00:12:25.720 "block_size": 512, 00:12:25.720 "num_blocks": 65536, 00:12:25.720 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:25.720 "assigned_rate_limits": { 00:12:25.720 "rw_ios_per_sec": 0, 00:12:25.720 "rw_mbytes_per_sec": 0, 00:12:25.720 "r_mbytes_per_sec": 0, 00:12:25.720 "w_mbytes_per_sec": 0 00:12:25.720 }, 00:12:25.720 "claimed": true, 00:12:25.720 "claim_type": "exclusive_write", 00:12:25.720 "zoned": false, 00:12:25.720 "supported_io_types": { 00:12:25.720 "read": true, 00:12:25.720 "write": true, 00:12:25.720 "unmap": true, 00:12:25.720 "flush": true, 00:12:25.720 "reset": true, 00:12:25.720 "nvme_admin": false, 00:12:25.720 "nvme_io": false, 00:12:25.720 "nvme_io_md": false, 00:12:25.720 "write_zeroes": true, 00:12:25.720 "zcopy": true, 00:12:25.720 "get_zone_info": false, 00:12:25.720 "zone_management": false, 00:12:25.720 "zone_append": false, 00:12:25.720 "compare": false, 00:12:25.720 "compare_and_write": false, 00:12:25.720 "abort": true, 00:12:25.720 "seek_hole": false, 00:12:25.720 "seek_data": false, 00:12:25.720 "copy": true, 00:12:25.720 "nvme_iov_md": false 00:12:25.720 }, 00:12:25.720 
"memory_domains": [ 00:12:25.720 { 00:12:25.720 "dma_device_id": "system", 00:12:25.720 "dma_device_type": 1 00:12:25.720 }, 00:12:25.720 { 00:12:25.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.720 "dma_device_type": 2 00:12:25.720 } 00:12:25.720 ], 00:12:25.720 "driver_specific": {} 00:12:25.720 } 00:12:25.720 ] 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.720 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.720 "name": "Existed_Raid", 00:12:25.720 "uuid": "d4403275-5e39-4ba0-9c88-573b4777c674", 00:12:25.720 "strip_size_kb": 64, 00:12:25.720 "state": "online", 00:12:25.720 "raid_level": "concat", 00:12:25.720 "superblock": false, 00:12:25.720 "num_base_bdevs": 3, 00:12:25.720 "num_base_bdevs_discovered": 3, 00:12:25.720 "num_base_bdevs_operational": 3, 00:12:25.720 "base_bdevs_list": [ 00:12:25.720 { 00:12:25.720 "name": "NewBaseBdev", 00:12:25.720 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:25.720 "is_configured": true, 00:12:25.720 "data_offset": 0, 00:12:25.720 "data_size": 65536 00:12:25.720 }, 00:12:25.720 { 00:12:25.720 "name": "BaseBdev2", 00:12:25.720 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:25.720 "is_configured": true, 00:12:25.720 "data_offset": 0, 00:12:25.720 "data_size": 65536 00:12:25.720 }, 00:12:25.721 { 00:12:25.721 "name": "BaseBdev3", 00:12:25.721 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:25.721 "is_configured": true, 00:12:25.721 "data_offset": 0, 00:12:25.721 "data_size": 65536 00:12:25.721 } 00:12:25.721 ] 00:12:25.721 }' 00:12:25.721 07:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.721 07:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.287 [2024-11-20 07:09:08.281013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.287 "name": "Existed_Raid", 00:12:26.287 "aliases": [ 00:12:26.287 "d4403275-5e39-4ba0-9c88-573b4777c674" 00:12:26.287 ], 00:12:26.287 "product_name": "Raid Volume", 00:12:26.287 "block_size": 512, 00:12:26.287 "num_blocks": 196608, 00:12:26.287 "uuid": "d4403275-5e39-4ba0-9c88-573b4777c674", 00:12:26.287 "assigned_rate_limits": { 00:12:26.287 "rw_ios_per_sec": 0, 00:12:26.287 "rw_mbytes_per_sec": 0, 00:12:26.287 "r_mbytes_per_sec": 0, 00:12:26.287 "w_mbytes_per_sec": 0 00:12:26.287 }, 00:12:26.287 "claimed": false, 00:12:26.287 "zoned": false, 00:12:26.287 "supported_io_types": { 00:12:26.287 "read": true, 00:12:26.287 "write": true, 00:12:26.287 "unmap": true, 00:12:26.287 "flush": true, 00:12:26.287 "reset": true, 00:12:26.287 "nvme_admin": false, 00:12:26.287 "nvme_io": false, 00:12:26.287 "nvme_io_md": false, 00:12:26.287 "write_zeroes": true, 
00:12:26.287 "zcopy": false, 00:12:26.287 "get_zone_info": false, 00:12:26.287 "zone_management": false, 00:12:26.287 "zone_append": false, 00:12:26.287 "compare": false, 00:12:26.287 "compare_and_write": false, 00:12:26.287 "abort": false, 00:12:26.287 "seek_hole": false, 00:12:26.287 "seek_data": false, 00:12:26.287 "copy": false, 00:12:26.287 "nvme_iov_md": false 00:12:26.287 }, 00:12:26.287 "memory_domains": [ 00:12:26.287 { 00:12:26.287 "dma_device_id": "system", 00:12:26.287 "dma_device_type": 1 00:12:26.287 }, 00:12:26.287 { 00:12:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.287 "dma_device_type": 2 00:12:26.287 }, 00:12:26.287 { 00:12:26.287 "dma_device_id": "system", 00:12:26.287 "dma_device_type": 1 00:12:26.287 }, 00:12:26.287 { 00:12:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.287 "dma_device_type": 2 00:12:26.287 }, 00:12:26.287 { 00:12:26.287 "dma_device_id": "system", 00:12:26.287 "dma_device_type": 1 00:12:26.287 }, 00:12:26.287 { 00:12:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.287 "dma_device_type": 2 00:12:26.287 } 00:12:26.287 ], 00:12:26.287 "driver_specific": { 00:12:26.287 "raid": { 00:12:26.287 "uuid": "d4403275-5e39-4ba0-9c88-573b4777c674", 00:12:26.287 "strip_size_kb": 64, 00:12:26.287 "state": "online", 00:12:26.287 "raid_level": "concat", 00:12:26.287 "superblock": false, 00:12:26.287 "num_base_bdevs": 3, 00:12:26.287 "num_base_bdevs_discovered": 3, 00:12:26.287 "num_base_bdevs_operational": 3, 00:12:26.287 "base_bdevs_list": [ 00:12:26.287 { 00:12:26.287 "name": "NewBaseBdev", 00:12:26.287 "uuid": "fd0d95de-55bd-4b9e-a7fd-6f6fe08f91fc", 00:12:26.287 "is_configured": true, 00:12:26.287 "data_offset": 0, 00:12:26.287 "data_size": 65536 00:12:26.287 }, 00:12:26.287 { 00:12:26.287 "name": "BaseBdev2", 00:12:26.287 "uuid": "d8a97779-bb40-4f49-86ac-f642981dfc82", 00:12:26.287 "is_configured": true, 00:12:26.287 "data_offset": 0, 00:12:26.287 "data_size": 65536 00:12:26.287 }, 00:12:26.287 { 
00:12:26.287 "name": "BaseBdev3", 00:12:26.287 "uuid": "9f41922e-c30e-4078-bbd7-c0b86a3dc36c", 00:12:26.287 "is_configured": true, 00:12:26.287 "data_offset": 0, 00:12:26.287 "data_size": 65536 00:12:26.287 } 00:12:26.287 ] 00:12:26.287 } 00:12:26.287 } 00:12:26.287 }' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:26.287 BaseBdev2 00:12:26.287 BaseBdev3' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:26.287 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.288 07:09:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:26.288 [2024-11-20 07:09:08.544236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:26.288 [2024-11-20 07:09:08.544268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.288 [2024-11-20 07:09:08.544378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.288 [2024-11-20 07:09:08.544448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.288 [2024-11-20 07:09:08.544461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65910 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65910 ']' 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65910 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65910 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65910' 00:12:26.545 killing process with pid 65910 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65910 00:12:26.545 [2024-11-20 07:09:08.593723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.545 07:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65910 00:12:26.803 [2024-11-20 07:09:08.916288] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:28.184 00:12:28.184 real 0m11.044s 00:12:28.184 user 0m17.609s 00:12:28.184 sys 0m1.784s 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.184 ************************************ 00:12:28.184 END TEST raid_state_function_test 00:12:28.184 ************************************ 00:12:28.184 07:09:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:28.184 07:09:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.184 07:09:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.184 07:09:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.184 ************************************ 00:12:28.184 START TEST raid_state_function_test_sb 00:12:28.184 ************************************ 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:28.184 Process raid pid: 66537 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66537 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66537' 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66537 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:28.184 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66537 ']' 00:12:28.185 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.185 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.185 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:28.185 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.185 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.185 [2024-11-20 07:09:10.292466] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:28.185 [2024-11-20 07:09:10.292659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.444 [2024-11-20 07:09:10.472366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.444 [2024-11-20 07:09:10.597944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.703 [2024-11-20 07:09:10.819957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.703 [2024-11-20 07:09:10.820086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.962 [2024-11-20 07:09:11.166768] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.962 [2024-11-20 07:09:11.166880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.962 [2024-11-20 
07:09:11.166916] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.962 [2024-11-20 07:09:11.166969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.962 [2024-11-20 07:09:11.167000] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.962 [2024-11-20 07:09:11.167034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.962 "name": "Existed_Raid", 00:12:28.962 "uuid": "261cdbbc-c465-4ec5-acc2-19d85d5d62d3", 00:12:28.962 "strip_size_kb": 64, 00:12:28.962 "state": "configuring", 00:12:28.962 "raid_level": "concat", 00:12:28.962 "superblock": true, 00:12:28.962 "num_base_bdevs": 3, 00:12:28.962 "num_base_bdevs_discovered": 0, 00:12:28.962 "num_base_bdevs_operational": 3, 00:12:28.962 "base_bdevs_list": [ 00:12:28.962 { 00:12:28.962 "name": "BaseBdev1", 00:12:28.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.962 "is_configured": false, 00:12:28.962 "data_offset": 0, 00:12:28.962 "data_size": 0 00:12:28.962 }, 00:12:28.962 { 00:12:28.962 "name": "BaseBdev2", 00:12:28.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.962 "is_configured": false, 00:12:28.962 "data_offset": 0, 00:12:28.962 "data_size": 0 00:12:28.962 }, 00:12:28.962 { 00:12:28.962 "name": "BaseBdev3", 00:12:28.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.962 "is_configured": false, 00:12:28.962 "data_offset": 0, 00:12:28.962 "data_size": 0 00:12:28.962 } 00:12:28.962 ] 00:12:28.962 }' 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.962 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 [2024-11-20 07:09:11.641908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.530 [2024-11-20 07:09:11.642014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 [2024-11-20 07:09:11.653897] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:29.530 [2024-11-20 07:09:11.653947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:29.530 [2024-11-20 07:09:11.653958] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.530 [2024-11-20 07:09:11.653970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.530 [2024-11-20 07:09:11.653978] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:29.530 [2024-11-20 07:09:11.653989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:29.530 
07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 [2024-11-20 07:09:11.701737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.530 BaseBdev1 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.530 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 [ 00:12:29.530 { 
00:12:29.530 "name": "BaseBdev1", 00:12:29.530 "aliases": [ 00:12:29.530 "86752ada-70c7-4116-83cc-2f5248155ae2" 00:12:29.530 ], 00:12:29.530 "product_name": "Malloc disk", 00:12:29.530 "block_size": 512, 00:12:29.530 "num_blocks": 65536, 00:12:29.530 "uuid": "86752ada-70c7-4116-83cc-2f5248155ae2", 00:12:29.530 "assigned_rate_limits": { 00:12:29.530 "rw_ios_per_sec": 0, 00:12:29.530 "rw_mbytes_per_sec": 0, 00:12:29.530 "r_mbytes_per_sec": 0, 00:12:29.530 "w_mbytes_per_sec": 0 00:12:29.530 }, 00:12:29.530 "claimed": true, 00:12:29.530 "claim_type": "exclusive_write", 00:12:29.530 "zoned": false, 00:12:29.530 "supported_io_types": { 00:12:29.530 "read": true, 00:12:29.530 "write": true, 00:12:29.530 "unmap": true, 00:12:29.530 "flush": true, 00:12:29.530 "reset": true, 00:12:29.530 "nvme_admin": false, 00:12:29.530 "nvme_io": false, 00:12:29.530 "nvme_io_md": false, 00:12:29.530 "write_zeroes": true, 00:12:29.531 "zcopy": true, 00:12:29.531 "get_zone_info": false, 00:12:29.531 "zone_management": false, 00:12:29.531 "zone_append": false, 00:12:29.531 "compare": false, 00:12:29.531 "compare_and_write": false, 00:12:29.531 "abort": true, 00:12:29.531 "seek_hole": false, 00:12:29.531 "seek_data": false, 00:12:29.531 "copy": true, 00:12:29.531 "nvme_iov_md": false 00:12:29.531 }, 00:12:29.531 "memory_domains": [ 00:12:29.531 { 00:12:29.531 "dma_device_id": "system", 00:12:29.531 "dma_device_type": 1 00:12:29.531 }, 00:12:29.531 { 00:12:29.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.531 "dma_device_type": 2 00:12:29.531 } 00:12:29.531 ], 00:12:29.531 "driver_specific": {} 00:12:29.531 } 00:12:29.531 ] 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.531 "name": "Existed_Raid", 00:12:29.531 "uuid": "b1f06973-0c77-4b0d-adb2-f9f32fd60950", 00:12:29.531 "strip_size_kb": 64, 00:12:29.531 "state": "configuring", 00:12:29.531 "raid_level": "concat", 00:12:29.531 "superblock": true, 00:12:29.531 
"num_base_bdevs": 3, 00:12:29.531 "num_base_bdevs_discovered": 1, 00:12:29.531 "num_base_bdevs_operational": 3, 00:12:29.531 "base_bdevs_list": [ 00:12:29.531 { 00:12:29.531 "name": "BaseBdev1", 00:12:29.531 "uuid": "86752ada-70c7-4116-83cc-2f5248155ae2", 00:12:29.531 "is_configured": true, 00:12:29.531 "data_offset": 2048, 00:12:29.531 "data_size": 63488 00:12:29.531 }, 00:12:29.531 { 00:12:29.531 "name": "BaseBdev2", 00:12:29.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.531 "is_configured": false, 00:12:29.531 "data_offset": 0, 00:12:29.531 "data_size": 0 00:12:29.531 }, 00:12:29.531 { 00:12:29.531 "name": "BaseBdev3", 00:12:29.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.531 "is_configured": false, 00:12:29.531 "data_offset": 0, 00:12:29.531 "data_size": 0 00:12:29.531 } 00:12:29.531 ] 00:12:29.531 }' 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.531 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.099 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:30.099 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.099 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.099 [2024-11-20 07:09:12.157058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.100 [2024-11-20 07:09:12.157179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:30.100 
07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.100 [2024-11-20 07:09:12.169090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.100 [2024-11-20 07:09:12.171272] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.100 [2024-11-20 07:09:12.171365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.100 [2024-11-20 07:09:12.171400] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.100 [2024-11-20 07:09:12.171427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:30.100 "name": "Existed_Raid",
00:12:30.100 "uuid": "adaaf497-6b2b-4483-a6a4-a6fa0a455766",
00:12:30.100 "strip_size_kb": 64,
00:12:30.100 "state": "configuring",
00:12:30.100 "raid_level": "concat",
00:12:30.100 "superblock": true,
00:12:30.100 "num_base_bdevs": 3,
00:12:30.100 "num_base_bdevs_discovered": 1,
00:12:30.100 "num_base_bdevs_operational": 3,
00:12:30.100 "base_bdevs_list": [
00:12:30.100 {
00:12:30.100 "name": "BaseBdev1",
00:12:30.100 "uuid": "86752ada-70c7-4116-83cc-2f5248155ae2",
00:12:30.100 "is_configured": true,
00:12:30.100 "data_offset": 2048,
00:12:30.100 "data_size": 63488
00:12:30.100 },
00:12:30.100 {
00:12:30.100 "name": "BaseBdev2",
00:12:30.100 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:30.100 "is_configured": false,
00:12:30.100 "data_offset": 0,
00:12:30.100 "data_size": 0
00:12:30.100 },
00:12:30.100 {
00:12:30.100 "name": "BaseBdev3",
00:12:30.100 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:30.100 "is_configured": false,
00:12:30.100 "data_offset": 0,
00:12:30.100 "data_size": 0
00:12:30.100 }
00:12:30.100 ]
00:12:30.100 }'
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:30.100 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.358 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:30.358 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.358 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.616 [2024-11-20 07:09:12.635301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:30.616 BaseBdev2
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.616 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.616 [
00:12:30.616 {
00:12:30.616 "name": "BaseBdev2",
00:12:30.616 "aliases": [
00:12:30.616 "142a4367-58d8-4a05-9989-d1381a877d87"
00:12:30.616 ],
00:12:30.616 "product_name": "Malloc disk",
00:12:30.616 "block_size": 512,
00:12:30.616 "num_blocks": 65536,
00:12:30.616 "uuid": "142a4367-58d8-4a05-9989-d1381a877d87",
00:12:30.616 "assigned_rate_limits": {
00:12:30.616 "rw_ios_per_sec": 0,
00:12:30.616 "rw_mbytes_per_sec": 0,
00:12:30.616 "r_mbytes_per_sec": 0,
00:12:30.616 "w_mbytes_per_sec": 0
00:12:30.616 },
00:12:30.616 "claimed": true,
00:12:30.616 "claim_type": "exclusive_write",
00:12:30.616 "zoned": false,
00:12:30.616 "supported_io_types": {
00:12:30.616 "read": true,
00:12:30.616 "write": true,
00:12:30.616 "unmap": true,
00:12:30.616 "flush": true,
00:12:30.616 "reset": true,
00:12:30.616 "nvme_admin": false,
00:12:30.616 "nvme_io": false,
00:12:30.616 "nvme_io_md": false,
00:12:30.616 "write_zeroes": true,
00:12:30.616 "zcopy": true,
00:12:30.616 "get_zone_info": false,
00:12:30.616 "zone_management": false,
00:12:30.616 "zone_append": false,
00:12:30.616 "compare": false,
00:12:30.617 "compare_and_write": false,
00:12:30.617 "abort": true,
00:12:30.617 "seek_hole": false,
00:12:30.617 "seek_data": false,
00:12:30.617 "copy": true,
00:12:30.617 "nvme_iov_md": false
00:12:30.617 },
00:12:30.617 "memory_domains": [
00:12:30.617 {
00:12:30.617 "dma_device_id": "system",
00:12:30.617 "dma_device_type": 1
00:12:30.617 },
00:12:30.617 {
00:12:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:30.617 "dma_device_type": 2
00:12:30.617 }
00:12:30.617 ],
00:12:30.617 "driver_specific": {}
00:12:30.617 }
00:12:30.617 ]
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:30.617 "name": "Existed_Raid",
00:12:30.617 "uuid": "adaaf497-6b2b-4483-a6a4-a6fa0a455766",
00:12:30.617 "strip_size_kb": 64,
00:12:30.617 "state": "configuring",
00:12:30.617 "raid_level": "concat",
00:12:30.617 "superblock": true,
00:12:30.617 "num_base_bdevs": 3,
00:12:30.617 "num_base_bdevs_discovered": 2,
00:12:30.617 "num_base_bdevs_operational": 3,
00:12:30.617 "base_bdevs_list": [
00:12:30.617 {
00:12:30.617 "name": "BaseBdev1",
00:12:30.617 "uuid": "86752ada-70c7-4116-83cc-2f5248155ae2",
00:12:30.617 "is_configured": true,
00:12:30.617 "data_offset": 2048,
00:12:30.617 "data_size": 63488
00:12:30.617 },
00:12:30.617 {
00:12:30.617 "name": "BaseBdev2",
00:12:30.617 "uuid": "142a4367-58d8-4a05-9989-d1381a877d87",
00:12:30.617 "is_configured": true,
00:12:30.617 "data_offset": 2048,
00:12:30.617 "data_size": 63488
00:12:30.617 },
00:12:30.617 {
00:12:30.617 "name": "BaseBdev3",
00:12:30.617 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:30.617 "is_configured": false,
00:12:30.617 "data_offset": 0,
00:12:30.617 "data_size": 0
00:12:30.617 }
00:12:30.617 ]
00:12:30.617 }'
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:30.617 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:30.883 07:09:13
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.883 [2024-11-20 07:09:13.095166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.883 [2024-11-20 07:09:13.095581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:30.883 [2024-11-20 07:09:13.095653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:30.883 [2024-11-20 07:09:13.096009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:30.883 [2024-11-20 07:09:13.096249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:30.883 BaseBdev3 00:12:30.883 [2024-11-20 07:09:13.096300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:30.883 [2024-11-20 07:09:13.096518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.883 [ 00:12:30.883 { 00:12:30.883 "name": "BaseBdev3", 00:12:30.883 "aliases": [ 00:12:30.883 "21cc719d-7469-45d4-9330-0b72748b622a" 00:12:30.883 ], 00:12:30.883 "product_name": "Malloc disk", 00:12:30.883 "block_size": 512, 00:12:30.883 "num_blocks": 65536, 00:12:30.883 "uuid": "21cc719d-7469-45d4-9330-0b72748b622a", 00:12:30.883 "assigned_rate_limits": { 00:12:30.883 "rw_ios_per_sec": 0, 00:12:30.883 "rw_mbytes_per_sec": 0, 00:12:30.883 "r_mbytes_per_sec": 0, 00:12:30.883 "w_mbytes_per_sec": 0 00:12:30.883 }, 00:12:30.883 "claimed": true, 00:12:30.883 "claim_type": "exclusive_write", 00:12:30.883 "zoned": false, 00:12:30.883 "supported_io_types": { 00:12:30.883 "read": true, 00:12:30.883 "write": true, 00:12:30.883 "unmap": true, 00:12:30.883 "flush": true, 00:12:30.883 "reset": true, 00:12:30.883 "nvme_admin": false, 00:12:30.883 "nvme_io": false, 00:12:30.883 "nvme_io_md": false, 00:12:30.883 "write_zeroes": true, 00:12:30.883 "zcopy": true, 00:12:30.883 "get_zone_info": false, 00:12:30.883 "zone_management": false, 00:12:30.883 "zone_append": false, 00:12:30.883 "compare": false, 00:12:30.883 "compare_and_write": false, 00:12:30.883 "abort": true, 00:12:30.883 "seek_hole": false, 00:12:30.883 "seek_data": false, 
00:12:30.883 "copy": true, 00:12:30.883 "nvme_iov_md": false 00:12:30.883 }, 00:12:30.883 "memory_domains": [ 00:12:30.883 { 00:12:30.883 "dma_device_id": "system", 00:12:30.883 "dma_device_type": 1 00:12:30.883 }, 00:12:30.883 { 00:12:30.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.883 "dma_device_type": 2 00:12:30.883 } 00:12:30.883 ], 00:12:30.883 "driver_specific": {} 00:12:30.883 } 00:12:30.883 ] 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.883 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.145 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.145 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.145 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.145 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.145 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.145 "name": "Existed_Raid", 00:12:31.145 "uuid": "adaaf497-6b2b-4483-a6a4-a6fa0a455766", 00:12:31.145 "strip_size_kb": 64, 00:12:31.145 "state": "online", 00:12:31.145 "raid_level": "concat", 00:12:31.145 "superblock": true, 00:12:31.145 "num_base_bdevs": 3, 00:12:31.145 "num_base_bdevs_discovered": 3, 00:12:31.145 "num_base_bdevs_operational": 3, 00:12:31.145 "base_bdevs_list": [ 00:12:31.145 { 00:12:31.145 "name": "BaseBdev1", 00:12:31.145 "uuid": "86752ada-70c7-4116-83cc-2f5248155ae2", 00:12:31.145 "is_configured": true, 00:12:31.145 "data_offset": 2048, 00:12:31.145 "data_size": 63488 00:12:31.145 }, 00:12:31.145 { 00:12:31.145 "name": "BaseBdev2", 00:12:31.145 "uuid": "142a4367-58d8-4a05-9989-d1381a877d87", 00:12:31.145 "is_configured": true, 00:12:31.145 "data_offset": 2048, 00:12:31.145 "data_size": 63488 00:12:31.145 }, 00:12:31.145 { 00:12:31.145 "name": "BaseBdev3", 00:12:31.145 "uuid": "21cc719d-7469-45d4-9330-0b72748b622a", 00:12:31.145 "is_configured": true, 00:12:31.145 "data_offset": 2048, 00:12:31.145 "data_size": 63488 00:12:31.145 } 00:12:31.145 ] 00:12:31.145 }' 00:12:31.145 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.145 07:09:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.403 [2024-11-20 07:09:13.630769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.403 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.663 "name": "Existed_Raid", 00:12:31.663 "aliases": [ 00:12:31.663 "adaaf497-6b2b-4483-a6a4-a6fa0a455766" 00:12:31.663 ], 00:12:31.663 "product_name": "Raid Volume", 00:12:31.663 "block_size": 512, 00:12:31.663 "num_blocks": 190464, 00:12:31.663 "uuid": "adaaf497-6b2b-4483-a6a4-a6fa0a455766", 00:12:31.663 "assigned_rate_limits": { 00:12:31.663 "rw_ios_per_sec": 0, 00:12:31.663 "rw_mbytes_per_sec": 0, 00:12:31.663 
"r_mbytes_per_sec": 0, 00:12:31.663 "w_mbytes_per_sec": 0 00:12:31.663 }, 00:12:31.663 "claimed": false, 00:12:31.663 "zoned": false, 00:12:31.663 "supported_io_types": { 00:12:31.663 "read": true, 00:12:31.663 "write": true, 00:12:31.663 "unmap": true, 00:12:31.663 "flush": true, 00:12:31.663 "reset": true, 00:12:31.663 "nvme_admin": false, 00:12:31.663 "nvme_io": false, 00:12:31.663 "nvme_io_md": false, 00:12:31.663 "write_zeroes": true, 00:12:31.663 "zcopy": false, 00:12:31.663 "get_zone_info": false, 00:12:31.663 "zone_management": false, 00:12:31.663 "zone_append": false, 00:12:31.663 "compare": false, 00:12:31.663 "compare_and_write": false, 00:12:31.663 "abort": false, 00:12:31.663 "seek_hole": false, 00:12:31.663 "seek_data": false, 00:12:31.663 "copy": false, 00:12:31.663 "nvme_iov_md": false 00:12:31.663 }, 00:12:31.663 "memory_domains": [ 00:12:31.663 { 00:12:31.663 "dma_device_id": "system", 00:12:31.663 "dma_device_type": 1 00:12:31.663 }, 00:12:31.663 { 00:12:31.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.663 "dma_device_type": 2 00:12:31.663 }, 00:12:31.663 { 00:12:31.663 "dma_device_id": "system", 00:12:31.663 "dma_device_type": 1 00:12:31.663 }, 00:12:31.663 { 00:12:31.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.663 "dma_device_type": 2 00:12:31.663 }, 00:12:31.663 { 00:12:31.663 "dma_device_id": "system", 00:12:31.663 "dma_device_type": 1 00:12:31.663 }, 00:12:31.663 { 00:12:31.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.663 "dma_device_type": 2 00:12:31.663 } 00:12:31.663 ], 00:12:31.663 "driver_specific": { 00:12:31.663 "raid": { 00:12:31.663 "uuid": "adaaf497-6b2b-4483-a6a4-a6fa0a455766", 00:12:31.663 "strip_size_kb": 64, 00:12:31.663 "state": "online", 00:12:31.663 "raid_level": "concat", 00:12:31.663 "superblock": true, 00:12:31.663 "num_base_bdevs": 3, 00:12:31.663 "num_base_bdevs_discovered": 3, 00:12:31.663 "num_base_bdevs_operational": 3, 00:12:31.663 "base_bdevs_list": [ 00:12:31.663 { 00:12:31.663 
"name": "BaseBdev1", 00:12:31.663 "uuid": "86752ada-70c7-4116-83cc-2f5248155ae2", 00:12:31.663 "is_configured": true, 00:12:31.663 "data_offset": 2048, 00:12:31.663 "data_size": 63488 00:12:31.663 }, 00:12:31.663 { 00:12:31.663 "name": "BaseBdev2", 00:12:31.663 "uuid": "142a4367-58d8-4a05-9989-d1381a877d87", 00:12:31.663 "is_configured": true, 00:12:31.663 "data_offset": 2048, 00:12:31.663 "data_size": 63488 00:12:31.663 }, 00:12:31.663 { 00:12:31.663 "name": "BaseBdev3", 00:12:31.663 "uuid": "21cc719d-7469-45d4-9330-0b72748b622a", 00:12:31.663 "is_configured": true, 00:12:31.663 "data_offset": 2048, 00:12:31.663 "data_size": 63488 00:12:31.663 } 00:12:31.663 ] 00:12:31.663 } 00:12:31.663 } 00:12:31.663 }' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:31.663 BaseBdev2 00:12:31.663 BaseBdev3' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.663 07:09:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.663 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.664 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.664 07:09:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.664 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.664 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.664 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:31.664 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.664 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.664 [2024-11-20 07:09:13.902038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:31.664 [2024-11-20 07:09:13.902079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.664 [2024-11-20 07:09:13.902143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.922 "name": "Existed_Raid", 00:12:31.922 "uuid": "adaaf497-6b2b-4483-a6a4-a6fa0a455766", 00:12:31.922 "strip_size_kb": 64, 00:12:31.922 "state": "offline", 00:12:31.922 "raid_level": "concat", 00:12:31.922 "superblock": true, 00:12:31.922 "num_base_bdevs": 3, 00:12:31.922 "num_base_bdevs_discovered": 2, 00:12:31.922 "num_base_bdevs_operational": 2, 00:12:31.922 "base_bdevs_list": [ 00:12:31.922 { 00:12:31.922 "name": null, 00:12:31.922 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:31.922 "is_configured": false, 00:12:31.922 "data_offset": 0, 00:12:31.922 "data_size": 63488 00:12:31.922 }, 00:12:31.922 { 00:12:31.922 "name": "BaseBdev2", 00:12:31.922 "uuid": "142a4367-58d8-4a05-9989-d1381a877d87", 00:12:31.922 "is_configured": true, 00:12:31.922 "data_offset": 2048, 00:12:31.922 "data_size": 63488 00:12:31.922 }, 00:12:31.922 { 00:12:31.922 "name": "BaseBdev3", 00:12:31.922 "uuid": "21cc719d-7469-45d4-9330-0b72748b622a", 00:12:31.922 "is_configured": true, 00:12:31.922 "data_offset": 2048, 00:12:31.922 "data_size": 63488 00:12:31.922 } 00:12:31.922 ] 00:12:31.922 }' 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.922 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.182 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:32.182 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.182 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.182 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.182 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.182 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.442 [2024-11-20 07:09:14.493837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.442 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.442 [2024-11-20 07:09:14.654521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:32.442 [2024-11-20 07:09:14.654631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.703 BaseBdev2 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.703 
07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.703 [ 00:12:32.703 { 00:12:32.703 "name": "BaseBdev2", 00:12:32.703 "aliases": [ 00:12:32.703 "2facedad-1f47-4ff8-8926-7d507707167e" 00:12:32.703 ], 00:12:32.703 "product_name": "Malloc disk", 00:12:32.703 "block_size": 512, 00:12:32.703 "num_blocks": 65536, 00:12:32.703 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:32.703 "assigned_rate_limits": { 00:12:32.703 "rw_ios_per_sec": 0, 00:12:32.703 "rw_mbytes_per_sec": 0, 00:12:32.703 "r_mbytes_per_sec": 0, 00:12:32.703 "w_mbytes_per_sec": 0 
00:12:32.703 }, 00:12:32.703 "claimed": false, 00:12:32.703 "zoned": false, 00:12:32.703 "supported_io_types": { 00:12:32.703 "read": true, 00:12:32.703 "write": true, 00:12:32.703 "unmap": true, 00:12:32.703 "flush": true, 00:12:32.703 "reset": true, 00:12:32.703 "nvme_admin": false, 00:12:32.703 "nvme_io": false, 00:12:32.703 "nvme_io_md": false, 00:12:32.703 "write_zeroes": true, 00:12:32.703 "zcopy": true, 00:12:32.703 "get_zone_info": false, 00:12:32.703 "zone_management": false, 00:12:32.703 "zone_append": false, 00:12:32.703 "compare": false, 00:12:32.703 "compare_and_write": false, 00:12:32.703 "abort": true, 00:12:32.703 "seek_hole": false, 00:12:32.703 "seek_data": false, 00:12:32.703 "copy": true, 00:12:32.703 "nvme_iov_md": false 00:12:32.703 }, 00:12:32.703 "memory_domains": [ 00:12:32.703 { 00:12:32.703 "dma_device_id": "system", 00:12:32.703 "dma_device_type": 1 00:12:32.703 }, 00:12:32.703 { 00:12:32.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.703 "dma_device_type": 2 00:12:32.703 } 00:12:32.703 ], 00:12:32.703 "driver_specific": {} 00:12:32.703 } 00:12:32.703 ] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.703 BaseBdev3 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- 
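Editor's note: the `bdev_get_bdevs -b BaseBdev2` record above reports `block_size: 512` and `num_blocks: 65536`, which should correspond to the `bdev_malloc_create 32 512` call (32 MiB of 512-byte blocks). A minimal stdlib-only sketch checking that arithmetic against a pared-down copy of the logged JSON (the `size_mib` helper is illustrative, not part of SPDK):

```python
import json

# Pared-down copy of the `bdev_get_bdevs -b BaseBdev2` output logged above.
bdev_json = '''
[{"name": "BaseBdev2",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536}]
'''

def size_mib(bdev):
    # Total capacity in MiB: block_size * num_blocks / 2^20.
    return bdev["block_size"] * bdev["num_blocks"] // (1024 * 1024)

bdev = json.loads(bdev_json)[0]
print(size_mib(bdev))  # 512 * 65536 bytes = 32 MiB, matching `bdev_malloc_create 32 512`
```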
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.703 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.963 [ 00:12:32.963 { 00:12:32.963 "name": "BaseBdev3", 00:12:32.963 "aliases": [ 00:12:32.963 "9e5e3ec0-a892-450b-aece-fd7871d07914" 00:12:32.963 ], 00:12:32.963 "product_name": "Malloc disk", 00:12:32.963 "block_size": 512, 00:12:32.963 "num_blocks": 65536, 00:12:32.963 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:32.963 "assigned_rate_limits": { 00:12:32.963 "rw_ios_per_sec": 0, 00:12:32.963 "rw_mbytes_per_sec": 0, 
00:12:32.963 "r_mbytes_per_sec": 0, 00:12:32.963 "w_mbytes_per_sec": 0 00:12:32.963 }, 00:12:32.963 "claimed": false, 00:12:32.963 "zoned": false, 00:12:32.963 "supported_io_types": { 00:12:32.963 "read": true, 00:12:32.963 "write": true, 00:12:32.963 "unmap": true, 00:12:32.963 "flush": true, 00:12:32.963 "reset": true, 00:12:32.963 "nvme_admin": false, 00:12:32.963 "nvme_io": false, 00:12:32.963 "nvme_io_md": false, 00:12:32.963 "write_zeroes": true, 00:12:32.963 "zcopy": true, 00:12:32.963 "get_zone_info": false, 00:12:32.963 "zone_management": false, 00:12:32.963 "zone_append": false, 00:12:32.963 "compare": false, 00:12:32.963 "compare_and_write": false, 00:12:32.963 "abort": true, 00:12:32.963 "seek_hole": false, 00:12:32.963 "seek_data": false, 00:12:32.963 "copy": true, 00:12:32.963 "nvme_iov_md": false 00:12:32.963 }, 00:12:32.963 "memory_domains": [ 00:12:32.963 { 00:12:32.963 "dma_device_id": "system", 00:12:32.963 "dma_device_type": 1 00:12:32.963 }, 00:12:32.963 { 00:12:32.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.963 "dma_device_type": 2 00:12:32.963 } 00:12:32.963 ], 00:12:32.963 "driver_specific": {} 00:12:32.963 } 00:12:32.963 ] 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.963 [2024-11-20 07:09:14.987641] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.963 [2024-11-20 07:09:14.987754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.963 [2024-11-20 07:09:14.987820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.963 [2024-11-20 07:09:14.990023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.963 07:09:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.963 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.963 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.963 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.963 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.963 "name": "Existed_Raid", 00:12:32.963 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:32.963 "strip_size_kb": 64, 00:12:32.963 "state": "configuring", 00:12:32.963 "raid_level": "concat", 00:12:32.963 "superblock": true, 00:12:32.963 "num_base_bdevs": 3, 00:12:32.963 "num_base_bdevs_discovered": 2, 00:12:32.963 "num_base_bdevs_operational": 3, 00:12:32.963 "base_bdevs_list": [ 00:12:32.963 { 00:12:32.963 "name": "BaseBdev1", 00:12:32.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.963 "is_configured": false, 00:12:32.963 "data_offset": 0, 00:12:32.963 "data_size": 0 00:12:32.963 }, 00:12:32.963 { 00:12:32.963 "name": "BaseBdev2", 00:12:32.963 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:32.963 "is_configured": true, 00:12:32.963 "data_offset": 2048, 00:12:32.963 "data_size": 63488 00:12:32.963 }, 00:12:32.963 { 00:12:32.963 "name": "BaseBdev3", 00:12:32.963 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:32.963 "is_configured": true, 00:12:32.963 "data_offset": 2048, 00:12:32.963 "data_size": 63488 00:12:32.963 } 00:12:32.963 ] 00:12:32.963 }' 00:12:32.963 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.963 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
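Editor's note: the `verify_raid_bdev_state` helper above filters `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares state, level, strip size, and bdev counts. A hedged Python equivalent of that verification, run against a trimmed copy of the record just logged (field names are taken from the log; the function mirrors the shell helper, it is not SPDK code):

```python
import json

# Trimmed copy of the Existed_Raid record logged above.
raid_info = json.loads('''{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}''')

def verify_state(info, state, level, strip_kb, operational):
    # Mirrors the checks verify_raid_bdev_state performs in bdev_raid.sh.
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    # The discovered count must equal the configured entries in base_bdevs_list.
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

print(verify_state(raid_info, "configuring", "concat", 64, 3))  # 2
```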
00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 [2024-11-20 07:09:15.446831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.482 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.482 "name": "Existed_Raid", 00:12:33.482 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:33.482 "strip_size_kb": 64, 00:12:33.482 "state": "configuring", 00:12:33.482 "raid_level": "concat", 00:12:33.482 "superblock": true, 00:12:33.482 "num_base_bdevs": 3, 00:12:33.482 "num_base_bdevs_discovered": 1, 00:12:33.482 "num_base_bdevs_operational": 3, 00:12:33.482 "base_bdevs_list": [ 00:12:33.482 { 00:12:33.482 "name": "BaseBdev1", 00:12:33.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.482 "is_configured": false, 00:12:33.482 "data_offset": 0, 00:12:33.482 "data_size": 0 00:12:33.482 }, 00:12:33.482 { 00:12:33.482 "name": null, 00:12:33.482 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:33.482 "is_configured": false, 00:12:33.482 "data_offset": 0, 00:12:33.482 "data_size": 63488 00:12:33.482 }, 00:12:33.482 { 00:12:33.482 "name": "BaseBdev3", 00:12:33.482 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:33.482 "is_configured": true, 00:12:33.482 "data_offset": 2048, 00:12:33.482 "data_size": 63488 00:12:33.482 } 00:12:33.482 ] 00:12:33.482 }' 00:12:33.482 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.482 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.741 [2024-11-20 07:09:15.917368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.741 BaseBdev1 00:12:33.741 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.742 07:09:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.742 [ 00:12:33.742 { 00:12:33.742 "name": "BaseBdev1", 00:12:33.742 "aliases": [ 00:12:33.742 "1fad750d-9b63-4b7d-aefc-64de7da2c10e" 00:12:33.742 ], 00:12:33.742 "product_name": "Malloc disk", 00:12:33.742 "block_size": 512, 00:12:33.742 "num_blocks": 65536, 00:12:33.742 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:33.742 "assigned_rate_limits": { 00:12:33.742 "rw_ios_per_sec": 0, 00:12:33.742 "rw_mbytes_per_sec": 0, 00:12:33.742 "r_mbytes_per_sec": 0, 00:12:33.742 "w_mbytes_per_sec": 0 00:12:33.742 }, 00:12:33.742 "claimed": true, 00:12:33.742 "claim_type": "exclusive_write", 00:12:33.742 "zoned": false, 00:12:33.742 "supported_io_types": { 00:12:33.742 "read": true, 00:12:33.742 "write": true, 00:12:33.742 "unmap": true, 00:12:33.742 "flush": true, 00:12:33.742 "reset": true, 00:12:33.742 "nvme_admin": false, 00:12:33.742 "nvme_io": false, 00:12:33.742 "nvme_io_md": false, 00:12:33.742 "write_zeroes": true, 00:12:33.742 "zcopy": true, 00:12:33.742 "get_zone_info": false, 00:12:33.742 "zone_management": false, 00:12:33.742 "zone_append": false, 00:12:33.742 "compare": false, 00:12:33.742 "compare_and_write": false, 00:12:33.742 "abort": true, 00:12:33.742 "seek_hole": false, 00:12:33.742 "seek_data": false, 00:12:33.742 "copy": true, 00:12:33.742 "nvme_iov_md": false 00:12:33.742 }, 00:12:33.742 "memory_domains": [ 00:12:33.742 { 00:12:33.742 "dma_device_id": "system", 00:12:33.742 "dma_device_type": 1 00:12:33.742 }, 00:12:33.742 { 00:12:33.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.742 
"dma_device_type": 2 00:12:33.742 } 00:12:33.742 ], 00:12:33.742 "driver_specific": {} 00:12:33.742 } 00:12:33.742 ] 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.742 "name": "Existed_Raid", 00:12:33.742 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:33.742 "strip_size_kb": 64, 00:12:33.742 "state": "configuring", 00:12:33.742 "raid_level": "concat", 00:12:33.742 "superblock": true, 00:12:33.742 "num_base_bdevs": 3, 00:12:33.742 "num_base_bdevs_discovered": 2, 00:12:33.742 "num_base_bdevs_operational": 3, 00:12:33.742 "base_bdevs_list": [ 00:12:33.742 { 00:12:33.742 "name": "BaseBdev1", 00:12:33.742 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:33.742 "is_configured": true, 00:12:33.742 "data_offset": 2048, 00:12:33.742 "data_size": 63488 00:12:33.742 }, 00:12:33.742 { 00:12:33.742 "name": null, 00:12:33.742 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:33.742 "is_configured": false, 00:12:33.742 "data_offset": 0, 00:12:33.742 "data_size": 63488 00:12:33.742 }, 00:12:33.742 { 00:12:33.742 "name": "BaseBdev3", 00:12:33.742 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:33.742 "is_configured": true, 00:12:33.742 "data_offset": 2048, 00:12:33.742 "data_size": 63488 00:12:33.742 } 00:12:33.742 ] 00:12:33.742 }' 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.742 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
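Editor's note: because the raid was created with `-s` (superblock), space is reserved at the front of each base bdev; the configured entries above show `data_offset: 2048` and `data_size: 63488` against the malloc bdevs' 65536 total blocks. A small sketch checking that the reserved and usable regions account for the whole device (the 2048-block reservation is read off this log, not asserted from SPDK documentation):

```python
# Values read from the base_bdevs_list entries logged above.
num_blocks = 65536   # per malloc base bdev (512-byte blocks)
data_offset = 2048   # blocks reserved at the front of each base bdev (superblock)
data_size = 63488    # blocks left for raid data

# The usable region is whatever remains after the reserved region.
assert data_offset + data_size == num_blocks
print(data_size * 512 // (1024 * 1024))  # usable MiB per base bdev -> 31
```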
'.[0].base_bdevs_list[0].is_configured' 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.314 [2024-11-20 07:09:16.356705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:34.314 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.315 "name": "Existed_Raid", 00:12:34.315 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:34.315 "strip_size_kb": 64, 00:12:34.315 "state": "configuring", 00:12:34.315 "raid_level": "concat", 00:12:34.315 "superblock": true, 00:12:34.315 "num_base_bdevs": 3, 00:12:34.315 "num_base_bdevs_discovered": 1, 00:12:34.315 "num_base_bdevs_operational": 3, 00:12:34.315 "base_bdevs_list": [ 00:12:34.315 { 00:12:34.315 "name": "BaseBdev1", 00:12:34.315 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:34.315 "is_configured": true, 00:12:34.315 "data_offset": 2048, 00:12:34.315 "data_size": 63488 00:12:34.315 }, 00:12:34.315 { 00:12:34.315 "name": null, 00:12:34.315 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:34.315 "is_configured": false, 00:12:34.315 "data_offset": 0, 00:12:34.315 "data_size": 63488 00:12:34.315 }, 00:12:34.315 { 00:12:34.315 "name": null, 00:12:34.315 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:34.315 "is_configured": false, 00:12:34.315 "data_offset": 0, 00:12:34.315 "data_size": 63488 00:12:34.315 } 00:12:34.315 ] 00:12:34.315 }' 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.315 07:09:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.883 [2024-11-20 07:09:16.899847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.883 07:09:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.883 "name": "Existed_Raid", 00:12:34.883 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:34.883 "strip_size_kb": 64, 00:12:34.883 "state": "configuring", 00:12:34.883 "raid_level": "concat", 00:12:34.883 "superblock": true, 00:12:34.883 "num_base_bdevs": 3, 00:12:34.883 "num_base_bdevs_discovered": 2, 00:12:34.883 "num_base_bdevs_operational": 3, 00:12:34.883 "base_bdevs_list": [ 00:12:34.883 { 00:12:34.883 "name": "BaseBdev1", 00:12:34.883 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:34.883 "is_configured": true, 00:12:34.883 "data_offset": 2048, 00:12:34.883 "data_size": 63488 00:12:34.883 }, 00:12:34.883 { 00:12:34.883 "name": null, 00:12:34.883 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:34.883 "is_configured": 
false, 00:12:34.883 "data_offset": 0, 00:12:34.883 "data_size": 63488 00:12:34.883 }, 00:12:34.883 { 00:12:34.883 "name": "BaseBdev3", 00:12:34.883 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:34.883 "is_configured": true, 00:12:34.883 "data_offset": 2048, 00:12:34.883 "data_size": 63488 00:12:34.883 } 00:12:34.883 ] 00:12:34.883 }' 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.883 07:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.142 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.142 [2024-11-20 07:09:17.383061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:35.401 07:09:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.401 "name": "Existed_Raid", 00:12:35.401 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:35.401 "strip_size_kb": 64, 00:12:35.401 "state": "configuring", 00:12:35.401 "raid_level": "concat", 00:12:35.401 "superblock": true, 00:12:35.401 "num_base_bdevs": 3, 00:12:35.401 
"num_base_bdevs_discovered": 1, 00:12:35.401 "num_base_bdevs_operational": 3, 00:12:35.401 "base_bdevs_list": [ 00:12:35.401 { 00:12:35.401 "name": null, 00:12:35.401 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:35.401 "is_configured": false, 00:12:35.401 "data_offset": 0, 00:12:35.401 "data_size": 63488 00:12:35.401 }, 00:12:35.401 { 00:12:35.401 "name": null, 00:12:35.401 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:35.401 "is_configured": false, 00:12:35.401 "data_offset": 0, 00:12:35.401 "data_size": 63488 00:12:35.401 }, 00:12:35.401 { 00:12:35.401 "name": "BaseBdev3", 00:12:35.401 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:35.401 "is_configured": true, 00:12:35.401 "data_offset": 2048, 00:12:35.401 "data_size": 63488 00:12:35.401 } 00:12:35.401 ] 00:12:35.401 }' 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.401 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.969 07:09:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.969 [2024-11-20 07:09:17.993727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.969 07:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.969 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.969 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.969 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.969 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.969 
07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.969 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.969 "name": "Existed_Raid", 00:12:35.969 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:35.969 "strip_size_kb": 64, 00:12:35.969 "state": "configuring", 00:12:35.969 "raid_level": "concat", 00:12:35.969 "superblock": true, 00:12:35.969 "num_base_bdevs": 3, 00:12:35.969 "num_base_bdevs_discovered": 2, 00:12:35.969 "num_base_bdevs_operational": 3, 00:12:35.969 "base_bdevs_list": [ 00:12:35.969 { 00:12:35.969 "name": null, 00:12:35.969 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:35.969 "is_configured": false, 00:12:35.969 "data_offset": 0, 00:12:35.969 "data_size": 63488 00:12:35.969 }, 00:12:35.969 { 00:12:35.969 "name": "BaseBdev2", 00:12:35.969 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:35.969 "is_configured": true, 00:12:35.969 "data_offset": 2048, 00:12:35.969 "data_size": 63488 00:12:35.969 }, 00:12:35.969 { 00:12:35.969 "name": "BaseBdev3", 00:12:35.969 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:35.969 "is_configured": true, 00:12:35.969 "data_offset": 2048, 00:12:35.970 "data_size": 63488 00:12:35.970 } 00:12:35.970 ] 00:12:35.970 }' 00:12:35.970 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.970 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.229 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.229 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:36.229 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.229 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:36.229 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1fad750d-9b63-4b7d-aefc-64de7da2c10e 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.488 [2024-11-20 07:09:18.594801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:36.488 [2024-11-20 07:09:18.595141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:36.488 NewBaseBdev 00:12:36.488 [2024-11-20 07:09:18.595201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:36.488 [2024-11-20 07:09:18.595502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:36.488 [2024-11-20 07:09:18.595663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:36.488 [2024-11-20 07:09:18.595675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:12:36.488 [2024-11-20 07:09:18.595821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.488 [ 00:12:36.488 { 00:12:36.488 "name": "NewBaseBdev", 00:12:36.488 "aliases": [ 00:12:36.488 "1fad750d-9b63-4b7d-aefc-64de7da2c10e" 00:12:36.488 ], 00:12:36.488 "product_name": "Malloc disk", 00:12:36.488 "block_size": 512, 
00:12:36.488 "num_blocks": 65536, 00:12:36.488 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:36.488 "assigned_rate_limits": { 00:12:36.488 "rw_ios_per_sec": 0, 00:12:36.488 "rw_mbytes_per_sec": 0, 00:12:36.488 "r_mbytes_per_sec": 0, 00:12:36.488 "w_mbytes_per_sec": 0 00:12:36.488 }, 00:12:36.488 "claimed": true, 00:12:36.488 "claim_type": "exclusive_write", 00:12:36.488 "zoned": false, 00:12:36.488 "supported_io_types": { 00:12:36.488 "read": true, 00:12:36.488 "write": true, 00:12:36.488 "unmap": true, 00:12:36.488 "flush": true, 00:12:36.488 "reset": true, 00:12:36.488 "nvme_admin": false, 00:12:36.488 "nvme_io": false, 00:12:36.488 "nvme_io_md": false, 00:12:36.488 "write_zeroes": true, 00:12:36.488 "zcopy": true, 00:12:36.488 "get_zone_info": false, 00:12:36.488 "zone_management": false, 00:12:36.488 "zone_append": false, 00:12:36.488 "compare": false, 00:12:36.488 "compare_and_write": false, 00:12:36.488 "abort": true, 00:12:36.488 "seek_hole": false, 00:12:36.488 "seek_data": false, 00:12:36.488 "copy": true, 00:12:36.488 "nvme_iov_md": false 00:12:36.488 }, 00:12:36.488 "memory_domains": [ 00:12:36.488 { 00:12:36.488 "dma_device_id": "system", 00:12:36.488 "dma_device_type": 1 00:12:36.488 }, 00:12:36.488 { 00:12:36.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.488 "dma_device_type": 2 00:12:36.488 } 00:12:36.488 ], 00:12:36.488 "driver_specific": {} 00:12:36.488 } 00:12:36.488 ] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.488 "name": "Existed_Raid", 00:12:36.488 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:36.488 "strip_size_kb": 64, 00:12:36.488 "state": "online", 00:12:36.488 "raid_level": "concat", 00:12:36.488 "superblock": true, 00:12:36.488 "num_base_bdevs": 3, 00:12:36.488 "num_base_bdevs_discovered": 3, 00:12:36.488 "num_base_bdevs_operational": 3, 00:12:36.488 "base_bdevs_list": [ 00:12:36.488 { 00:12:36.488 "name": "NewBaseBdev", 00:12:36.488 "uuid": 
"1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:36.488 "is_configured": true, 00:12:36.488 "data_offset": 2048, 00:12:36.488 "data_size": 63488 00:12:36.488 }, 00:12:36.488 { 00:12:36.488 "name": "BaseBdev2", 00:12:36.488 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:36.488 "is_configured": true, 00:12:36.488 "data_offset": 2048, 00:12:36.488 "data_size": 63488 00:12:36.488 }, 00:12:36.488 { 00:12:36.488 "name": "BaseBdev3", 00:12:36.488 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:36.488 "is_configured": true, 00:12:36.488 "data_offset": 2048, 00:12:36.488 "data_size": 63488 00:12:36.488 } 00:12:36.488 ] 00:12:36.488 }' 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.488 07:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:12:37.055 [2024-11-20 07:09:19.110346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.055 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:37.055 "name": "Existed_Raid", 00:12:37.055 "aliases": [ 00:12:37.055 "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95" 00:12:37.055 ], 00:12:37.055 "product_name": "Raid Volume", 00:12:37.055 "block_size": 512, 00:12:37.055 "num_blocks": 190464, 00:12:37.055 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:37.055 "assigned_rate_limits": { 00:12:37.055 "rw_ios_per_sec": 0, 00:12:37.055 "rw_mbytes_per_sec": 0, 00:12:37.055 "r_mbytes_per_sec": 0, 00:12:37.055 "w_mbytes_per_sec": 0 00:12:37.055 }, 00:12:37.055 "claimed": false, 00:12:37.055 "zoned": false, 00:12:37.055 "supported_io_types": { 00:12:37.055 "read": true, 00:12:37.055 "write": true, 00:12:37.055 "unmap": true, 00:12:37.055 "flush": true, 00:12:37.055 "reset": true, 00:12:37.055 "nvme_admin": false, 00:12:37.055 "nvme_io": false, 00:12:37.055 "nvme_io_md": false, 00:12:37.055 "write_zeroes": true, 00:12:37.055 "zcopy": false, 00:12:37.055 "get_zone_info": false, 00:12:37.055 "zone_management": false, 00:12:37.055 "zone_append": false, 00:12:37.056 "compare": false, 00:12:37.056 "compare_and_write": false, 00:12:37.056 "abort": false, 00:12:37.056 "seek_hole": false, 00:12:37.056 "seek_data": false, 00:12:37.056 "copy": false, 00:12:37.056 "nvme_iov_md": false 00:12:37.056 }, 00:12:37.056 "memory_domains": [ 00:12:37.056 { 00:12:37.056 "dma_device_id": "system", 00:12:37.056 "dma_device_type": 1 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.056 "dma_device_type": 2 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "dma_device_id": "system", 00:12:37.056 "dma_device_type": 1 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.056 "dma_device_type": 2 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "dma_device_id": "system", 00:12:37.056 "dma_device_type": 1 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.056 "dma_device_type": 2 00:12:37.056 } 00:12:37.056 ], 00:12:37.056 "driver_specific": { 00:12:37.056 "raid": { 00:12:37.056 "uuid": "8ceca78f-d0ee-4ddb-b6aa-ded38b2b6f95", 00:12:37.056 "strip_size_kb": 64, 00:12:37.056 "state": "online", 00:12:37.056 "raid_level": "concat", 00:12:37.056 "superblock": true, 00:12:37.056 "num_base_bdevs": 3, 00:12:37.056 "num_base_bdevs_discovered": 3, 00:12:37.056 "num_base_bdevs_operational": 3, 00:12:37.056 "base_bdevs_list": [ 00:12:37.056 { 00:12:37.056 "name": "NewBaseBdev", 00:12:37.056 "uuid": "1fad750d-9b63-4b7d-aefc-64de7da2c10e", 00:12:37.056 "is_configured": true, 00:12:37.056 "data_offset": 2048, 00:12:37.056 "data_size": 63488 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "name": "BaseBdev2", 00:12:37.056 "uuid": "2facedad-1f47-4ff8-8926-7d507707167e", 00:12:37.056 "is_configured": true, 00:12:37.056 "data_offset": 2048, 00:12:37.056 "data_size": 63488 00:12:37.056 }, 00:12:37.056 { 00:12:37.056 "name": "BaseBdev3", 00:12:37.056 "uuid": "9e5e3ec0-a892-450b-aece-fd7871d07914", 00:12:37.056 "is_configured": true, 00:12:37.056 "data_offset": 2048, 00:12:37.056 "data_size": 63488 00:12:37.056 } 00:12:37.056 ] 00:12:37.056 } 00:12:37.056 } 00:12:37.056 }' 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:37.056 BaseBdev2 00:12:37.056 BaseBdev3' 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.056 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.315 [2024-11-20 07:09:19.405491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:37.315 [2024-11-20 07:09:19.405521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.315 [2024-11-20 07:09:19.405614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.315 [2024-11-20 07:09:19.405674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.315 [2024-11-20 07:09:19.405687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66537 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66537 ']' 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66537 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66537 00:12:37.315 killing process with pid 66537 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66537' 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66537 00:12:37.315 [2024-11-20 07:09:19.452299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.315 07:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66537 00:12:37.574 [2024-11-20 07:09:19.780893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.952 07:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:38.952 00:12:38.952 real 0m10.800s 00:12:38.952 user 0m17.080s 00:12:38.952 sys 0m1.817s 00:12:38.952 07:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:38.952 ************************************ 00:12:38.952 END TEST raid_state_function_test_sb 00:12:38.952 ************************************ 00:12:38.952 07:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.952 07:09:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:38.952 07:09:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:38.952 07:09:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.952 07:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.952 ************************************ 00:12:38.952 START TEST raid_superblock_test 00:12:38.952 ************************************ 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:38.952 07:09:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67158 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67158 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67158 ']' 00:12:38.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.952 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.953 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.953 [2024-11-20 07:09:21.162181] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:38.953 [2024-11-20 07:09:21.162446] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67158 ] 00:12:39.212 [2024-11-20 07:09:21.320308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.212 [2024-11-20 07:09:21.453380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.471 [2024-11-20 07:09:21.667511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.471 [2024-11-20 07:09:21.667567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:40.037 
07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.037 malloc1 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.037 [2024-11-20 07:09:22.166320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:40.037 [2024-11-20 07:09:22.166497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.037 [2024-11-20 07:09:22.166551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.037 [2024-11-20 07:09:22.166598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.037 [2024-11-20 07:09:22.169082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.037 [2024-11-20 07:09:22.169167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:40.037 pt1 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.037 malloc2 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.037 [2024-11-20 07:09:22.230895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.037 [2024-11-20 07:09:22.231020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.037 [2024-11-20 07:09:22.231076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:40.037 [2024-11-20 07:09:22.231110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.037 [2024-11-20 07:09:22.233541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.037 [2024-11-20 07:09:22.233629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.037 
pt2 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.037 malloc3 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.037 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.296 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.296 [2024-11-20 07:09:22.305739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.296 [2024-11-20 07:09:22.305899] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.296 [2024-11-20 07:09:22.305964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:40.296 [2024-11-20 07:09:22.306008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.297 [2024-11-20 07:09:22.308798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.297 [2024-11-20 07:09:22.308895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.297 pt3 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.297 [2024-11-20 07:09:22.317826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:40.297 [2024-11-20 07:09:22.320020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.297 [2024-11-20 07:09:22.320152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.297 [2024-11-20 07:09:22.320411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:40.297 [2024-11-20 07:09:22.320470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:40.297 [2024-11-20 07:09:22.320833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:40.297 [2024-11-20 07:09:22.321091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:40.297 [2024-11-20 07:09:22.321141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:40.297 [2024-11-20 07:09:22.321417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.297 07:09:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.297 "name": "raid_bdev1", 00:12:40.297 "uuid": "95055a30-469a-4b57-a788-2f72684c26a9", 00:12:40.297 "strip_size_kb": 64, 00:12:40.297 "state": "online", 00:12:40.297 "raid_level": "concat", 00:12:40.297 "superblock": true, 00:12:40.297 "num_base_bdevs": 3, 00:12:40.297 "num_base_bdevs_discovered": 3, 00:12:40.297 "num_base_bdevs_operational": 3, 00:12:40.297 "base_bdevs_list": [ 00:12:40.297 { 00:12:40.297 "name": "pt1", 00:12:40.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.297 "is_configured": true, 00:12:40.297 "data_offset": 2048, 00:12:40.297 "data_size": 63488 00:12:40.297 }, 00:12:40.297 { 00:12:40.297 "name": "pt2", 00:12:40.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.297 "is_configured": true, 00:12:40.297 "data_offset": 2048, 00:12:40.297 "data_size": 63488 00:12:40.297 }, 00:12:40.297 { 00:12:40.297 "name": "pt3", 00:12:40.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.297 "is_configured": true, 00:12:40.297 "data_offset": 2048, 00:12:40.297 "data_size": 63488 00:12:40.297 } 00:12:40.297 ] 00:12:40.297 }' 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.297 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.555 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.556 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.556 [2024-11-20 07:09:22.809391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.814 "name": "raid_bdev1", 00:12:40.814 "aliases": [ 00:12:40.814 "95055a30-469a-4b57-a788-2f72684c26a9" 00:12:40.814 ], 00:12:40.814 "product_name": "Raid Volume", 00:12:40.814 "block_size": 512, 00:12:40.814 "num_blocks": 190464, 00:12:40.814 "uuid": "95055a30-469a-4b57-a788-2f72684c26a9", 00:12:40.814 "assigned_rate_limits": { 00:12:40.814 "rw_ios_per_sec": 0, 00:12:40.814 "rw_mbytes_per_sec": 0, 00:12:40.814 "r_mbytes_per_sec": 0, 00:12:40.814 "w_mbytes_per_sec": 0 00:12:40.814 }, 00:12:40.814 "claimed": false, 00:12:40.814 "zoned": false, 00:12:40.814 "supported_io_types": { 00:12:40.814 "read": true, 00:12:40.814 "write": true, 00:12:40.814 "unmap": true, 00:12:40.814 "flush": true, 00:12:40.814 "reset": true, 00:12:40.814 "nvme_admin": false, 00:12:40.814 "nvme_io": false, 00:12:40.814 "nvme_io_md": false, 00:12:40.814 "write_zeroes": true, 00:12:40.814 "zcopy": false, 00:12:40.814 "get_zone_info": false, 00:12:40.814 "zone_management": false, 00:12:40.814 "zone_append": false, 00:12:40.814 "compare": 
false, 00:12:40.814 "compare_and_write": false, 00:12:40.814 "abort": false, 00:12:40.814 "seek_hole": false, 00:12:40.814 "seek_data": false, 00:12:40.814 "copy": false, 00:12:40.814 "nvme_iov_md": false 00:12:40.814 }, 00:12:40.814 "memory_domains": [ 00:12:40.814 { 00:12:40.814 "dma_device_id": "system", 00:12:40.814 "dma_device_type": 1 00:12:40.814 }, 00:12:40.814 { 00:12:40.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.814 "dma_device_type": 2 00:12:40.814 }, 00:12:40.814 { 00:12:40.814 "dma_device_id": "system", 00:12:40.814 "dma_device_type": 1 00:12:40.814 }, 00:12:40.814 { 00:12:40.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.814 "dma_device_type": 2 00:12:40.814 }, 00:12:40.814 { 00:12:40.814 "dma_device_id": "system", 00:12:40.814 "dma_device_type": 1 00:12:40.814 }, 00:12:40.814 { 00:12:40.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.814 "dma_device_type": 2 00:12:40.814 } 00:12:40.814 ], 00:12:40.814 "driver_specific": { 00:12:40.814 "raid": { 00:12:40.814 "uuid": "95055a30-469a-4b57-a788-2f72684c26a9", 00:12:40.814 "strip_size_kb": 64, 00:12:40.814 "state": "online", 00:12:40.814 "raid_level": "concat", 00:12:40.814 "superblock": true, 00:12:40.814 "num_base_bdevs": 3, 00:12:40.814 "num_base_bdevs_discovered": 3, 00:12:40.814 "num_base_bdevs_operational": 3, 00:12:40.814 "base_bdevs_list": [ 00:12:40.814 { 00:12:40.814 "name": "pt1", 00:12:40.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.814 "is_configured": true, 00:12:40.814 "data_offset": 2048, 00:12:40.814 "data_size": 63488 00:12:40.814 }, 00:12:40.814 { 00:12:40.814 "name": "pt2", 00:12:40.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.814 "is_configured": true, 00:12:40.814 "data_offset": 2048, 00:12:40.814 "data_size": 63488 00:12:40.814 }, 00:12:40.814 { 00:12:40.814 "name": "pt3", 00:12:40.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.814 "is_configured": true, 00:12:40.814 "data_offset": 2048, 00:12:40.814 
"data_size": 63488 00:12:40.814 } 00:12:40.814 ] 00:12:40.814 } 00:12:40.814 } 00:12:40.814 }' 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:40.814 pt2 00:12:40.814 pt3' 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.814 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.815 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.815 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.815 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.815 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:40.815 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.815 07:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:40.815 07:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.815 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.074 [2024-11-20 07:09:23.096809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=95055a30-469a-4b57-a788-2f72684c26a9 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 95055a30-469a-4b57-a788-2f72684c26a9 ']' 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.074 [2024-11-20 07:09:23.144436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.074 [2024-11-20 07:09:23.144510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.074 [2024-11-20 07:09:23.144644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.074 [2024-11-20 07:09:23.144740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.074 [2024-11-20 07:09:23.144793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.074 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.075 [2024-11-20 07:09:23.300251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:41.075 [2024-11-20 07:09:23.302445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:41.075 
[2024-11-20 07:09:23.302557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:41.075 [2024-11-20 07:09:23.302653] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:41.075 [2024-11-20 07:09:23.302760] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:41.075 [2024-11-20 07:09:23.302828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:41.075 [2024-11-20 07:09:23.302887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.075 [2024-11-20 07:09:23.302923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:41.075 request: 00:12:41.075 { 00:12:41.075 "name": "raid_bdev1", 00:12:41.075 "raid_level": "concat", 00:12:41.075 "base_bdevs": [ 00:12:41.075 "malloc1", 00:12:41.075 "malloc2", 00:12:41.075 "malloc3" 00:12:41.075 ], 00:12:41.075 "strip_size_kb": 64, 00:12:41.075 "superblock": false, 00:12:41.075 "method": "bdev_raid_create", 00:12:41.075 "req_id": 1 00:12:41.075 } 00:12:41.075 Got JSON-RPC error response 00:12:41.075 response: 00:12:41.075 { 00:12:41.075 "code": -17, 00:12:41.075 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:41.075 } 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.075 07:09:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.075 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.332 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:41.332 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:41.332 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:41.332 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.332 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.332 [2024-11-20 07:09:23.360065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:41.332 [2024-11-20 07:09:23.360132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.332 [2024-11-20 07:09:23.360155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:41.332 [2024-11-20 07:09:23.360165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.332 [2024-11-20 07:09:23.362742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.332 [2024-11-20 07:09:23.362783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:41.332 [2024-11-20 07:09:23.362888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:41.332 [2024-11-20 07:09:23.362956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:12:41.332 pt1 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.333 "name": "raid_bdev1", 00:12:41.333 "uuid": 
"95055a30-469a-4b57-a788-2f72684c26a9", 00:12:41.333 "strip_size_kb": 64, 00:12:41.333 "state": "configuring", 00:12:41.333 "raid_level": "concat", 00:12:41.333 "superblock": true, 00:12:41.333 "num_base_bdevs": 3, 00:12:41.333 "num_base_bdevs_discovered": 1, 00:12:41.333 "num_base_bdevs_operational": 3, 00:12:41.333 "base_bdevs_list": [ 00:12:41.333 { 00:12:41.333 "name": "pt1", 00:12:41.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.333 "is_configured": true, 00:12:41.333 "data_offset": 2048, 00:12:41.333 "data_size": 63488 00:12:41.333 }, 00:12:41.333 { 00:12:41.333 "name": null, 00:12:41.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.333 "is_configured": false, 00:12:41.333 "data_offset": 2048, 00:12:41.333 "data_size": 63488 00:12:41.333 }, 00:12:41.333 { 00:12:41.333 "name": null, 00:12:41.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.333 "is_configured": false, 00:12:41.333 "data_offset": 2048, 00:12:41.333 "data_size": 63488 00:12:41.333 } 00:12:41.333 ] 00:12:41.333 }' 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.333 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.590 [2024-11-20 07:09:23.799411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:41.590 [2024-11-20 07:09:23.799556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.590 [2024-11-20 07:09:23.799602] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:41.590 [2024-11-20 07:09:23.799638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.590 [2024-11-20 07:09:23.800144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.590 [2024-11-20 07:09:23.800206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:41.590 [2024-11-20 07:09:23.800351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:41.590 [2024-11-20 07:09:23.800407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.590 pt2 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.590 [2024-11-20 07:09:23.811411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.590 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.591 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.591 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.591 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.591 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.591 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.591 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.591 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.849 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.849 "name": "raid_bdev1", 00:12:41.849 "uuid": "95055a30-469a-4b57-a788-2f72684c26a9", 00:12:41.849 "strip_size_kb": 64, 00:12:41.849 "state": "configuring", 00:12:41.849 "raid_level": "concat", 00:12:41.849 "superblock": true, 00:12:41.849 "num_base_bdevs": 3, 00:12:41.849 "num_base_bdevs_discovered": 1, 00:12:41.849 "num_base_bdevs_operational": 3, 00:12:41.849 "base_bdevs_list": [ 00:12:41.849 { 00:12:41.849 "name": "pt1", 00:12:41.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.849 "is_configured": true, 00:12:41.849 "data_offset": 2048, 00:12:41.849 "data_size": 63488 00:12:41.849 }, 00:12:41.849 { 00:12:41.849 "name": null, 00:12:41.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.849 "is_configured": false, 00:12:41.849 "data_offset": 0, 00:12:41.849 "data_size": 63488 00:12:41.849 }, 00:12:41.849 { 00:12:41.849 "name": null, 00:12:41.849 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:41.849 "is_configured": false, 00:12:41.849 "data_offset": 2048, 00:12:41.849 "data_size": 63488 00:12:41.849 } 00:12:41.849 ] 00:12:41.849 }' 00:12:41.849 07:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.849 07:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.109 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:42.109 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.109 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:42.109 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.109 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.109 [2024-11-20 07:09:24.322504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:42.109 [2024-11-20 07:09:24.322573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.109 [2024-11-20 07:09:24.322592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:42.109 [2024-11-20 07:09:24.322604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.109 [2024-11-20 07:09:24.323071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.109 [2024-11-20 07:09:24.323094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:42.109 [2024-11-20 07:09:24.323178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:42.109 [2024-11-20 07:09:24.323202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:42.109 pt2 00:12:42.109 07:09:24 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.109 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.110 [2024-11-20 07:09:24.334498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:42.110 [2024-11-20 07:09:24.334585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.110 [2024-11-20 07:09:24.334603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:42.110 [2024-11-20 07:09:24.334614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.110 [2024-11-20 07:09:24.334996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.110 [2024-11-20 07:09:24.335018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:42.110 [2024-11-20 07:09:24.335085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:42.110 [2024-11-20 07:09:24.335105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:42.110 [2024-11-20 07:09:24.335223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:42.110 [2024-11-20 07:09:24.335234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:42.110 [2024-11-20 07:09:24.335518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:42.110 [2024-11-20 
07:09:24.335677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:42.110 [2024-11-20 07:09:24.335687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:42.110 [2024-11-20 07:09:24.335847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.110 pt3 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.110 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.385 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.385 "name": "raid_bdev1", 00:12:42.385 "uuid": "95055a30-469a-4b57-a788-2f72684c26a9", 00:12:42.385 "strip_size_kb": 64, 00:12:42.385 "state": "online", 00:12:42.385 "raid_level": "concat", 00:12:42.385 "superblock": true, 00:12:42.385 "num_base_bdevs": 3, 00:12:42.385 "num_base_bdevs_discovered": 3, 00:12:42.385 "num_base_bdevs_operational": 3, 00:12:42.385 "base_bdevs_list": [ 00:12:42.385 { 00:12:42.385 "name": "pt1", 00:12:42.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.385 "is_configured": true, 00:12:42.385 "data_offset": 2048, 00:12:42.385 "data_size": 63488 00:12:42.385 }, 00:12:42.385 { 00:12:42.385 "name": "pt2", 00:12:42.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.385 "is_configured": true, 00:12:42.385 "data_offset": 2048, 00:12:42.385 "data_size": 63488 00:12:42.385 }, 00:12:42.385 { 00:12:42.385 "name": "pt3", 00:12:42.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.385 "is_configured": true, 00:12:42.385 "data_offset": 2048, 00:12:42.385 "data_size": 63488 00:12:42.385 } 00:12:42.385 ] 00:12:42.385 }' 00:12:42.385 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.385 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:42.644 07:09:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.644 [2024-11-20 07:09:24.798147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.644 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.644 "name": "raid_bdev1", 00:12:42.644 "aliases": [ 00:12:42.644 "95055a30-469a-4b57-a788-2f72684c26a9" 00:12:42.644 ], 00:12:42.644 "product_name": "Raid Volume", 00:12:42.644 "block_size": 512, 00:12:42.644 "num_blocks": 190464, 00:12:42.644 "uuid": "95055a30-469a-4b57-a788-2f72684c26a9", 00:12:42.644 "assigned_rate_limits": { 00:12:42.644 "rw_ios_per_sec": 0, 00:12:42.644 "rw_mbytes_per_sec": 0, 00:12:42.644 "r_mbytes_per_sec": 0, 00:12:42.644 "w_mbytes_per_sec": 0 00:12:42.644 }, 00:12:42.644 "claimed": false, 00:12:42.644 "zoned": false, 00:12:42.644 "supported_io_types": { 00:12:42.644 "read": true, 00:12:42.644 "write": true, 00:12:42.644 "unmap": true, 00:12:42.644 "flush": true, 00:12:42.644 "reset": true, 00:12:42.644 "nvme_admin": false, 00:12:42.644 "nvme_io": false, 00:12:42.644 "nvme_io_md": false, 00:12:42.644 
"write_zeroes": true, 00:12:42.644 "zcopy": false, 00:12:42.644 "get_zone_info": false, 00:12:42.644 "zone_management": false, 00:12:42.644 "zone_append": false, 00:12:42.644 "compare": false, 00:12:42.644 "compare_and_write": false, 00:12:42.644 "abort": false, 00:12:42.644 "seek_hole": false, 00:12:42.644 "seek_data": false, 00:12:42.644 "copy": false, 00:12:42.644 "nvme_iov_md": false 00:12:42.644 }, 00:12:42.644 "memory_domains": [ 00:12:42.644 { 00:12:42.644 "dma_device_id": "system", 00:12:42.644 "dma_device_type": 1 00:12:42.644 }, 00:12:42.644 { 00:12:42.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.644 "dma_device_type": 2 00:12:42.644 }, 00:12:42.644 { 00:12:42.644 "dma_device_id": "system", 00:12:42.644 "dma_device_type": 1 00:12:42.644 }, 00:12:42.644 { 00:12:42.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.644 "dma_device_type": 2 00:12:42.644 }, 00:12:42.644 { 00:12:42.644 "dma_device_id": "system", 00:12:42.644 "dma_device_type": 1 00:12:42.644 }, 00:12:42.644 { 00:12:42.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.644 "dma_device_type": 2 00:12:42.644 } 00:12:42.644 ], 00:12:42.644 "driver_specific": { 00:12:42.644 "raid": { 00:12:42.644 "uuid": "95055a30-469a-4b57-a788-2f72684c26a9", 00:12:42.644 "strip_size_kb": 64, 00:12:42.644 "state": "online", 00:12:42.644 "raid_level": "concat", 00:12:42.644 "superblock": true, 00:12:42.644 "num_base_bdevs": 3, 00:12:42.644 "num_base_bdevs_discovered": 3, 00:12:42.644 "num_base_bdevs_operational": 3, 00:12:42.644 "base_bdevs_list": [ 00:12:42.644 { 00:12:42.644 "name": "pt1", 00:12:42.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.644 "is_configured": true, 00:12:42.644 "data_offset": 2048, 00:12:42.644 "data_size": 63488 00:12:42.644 }, 00:12:42.645 { 00:12:42.645 "name": "pt2", 00:12:42.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.645 "is_configured": true, 00:12:42.645 "data_offset": 2048, 00:12:42.645 "data_size": 63488 00:12:42.645 }, 00:12:42.645 
{ 00:12:42.645 "name": "pt3", 00:12:42.645 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.645 "is_configured": true, 00:12:42.645 "data_offset": 2048, 00:12:42.645 "data_size": 63488 00:12:42.645 } 00:12:42.645 ] 00:12:42.645 } 00:12:42.645 } 00:12:42.645 }' 00:12:42.645 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.645 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:42.645 pt2 00:12:42.645 pt3' 00:12:42.645 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:42.904 07:09:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.904 07:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.904 
[2024-11-20 07:09:25.081677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 95055a30-469a-4b57-a788-2f72684c26a9 '!=' 95055a30-469a-4b57-a788-2f72684c26a9 ']' 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67158 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67158 ']' 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67158 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.904 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67158 00:12:43.164 killing process with pid 67158 00:12:43.164 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.164 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.164 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67158' 00:12:43.164 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67158 00:12:43.164 [2024-11-20 07:09:25.167809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.164 [2024-11-20 07:09:25.167928] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.164 07:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67158 00:12:43.164 [2024-11-20 07:09:25.167997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.164 [2024-11-20 07:09:25.168009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:43.422 [2024-11-20 07:09:25.498080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.802 07:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:44.802 00:12:44.802 real 0m5.616s 00:12:44.802 user 0m8.135s 00:12:44.802 sys 0m0.904s 00:12:44.802 07:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.802 07:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.802 ************************************ 00:12:44.802 END TEST raid_superblock_test 00:12:44.802 ************************************ 00:12:44.802 07:09:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:44.802 07:09:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:44.802 07:09:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.802 07:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.802 ************************************ 00:12:44.802 START TEST raid_read_error_test 00:12:44.802 ************************************ 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:44.802 07:09:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZYyv5mmmvO 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67422 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67422 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67422 ']' 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.802 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.802 [2024-11-20 07:09:26.828480] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:44.802 [2024-11-20 07:09:26.828674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67422 ] 00:12:44.802 [2024-11-20 07:09:27.005820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.061 [2024-11-20 07:09:27.132421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.319 [2024-11-20 07:09:27.367082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.319 [2024-11-20 07:09:27.367245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.577 BaseBdev1_malloc 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.577 true 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.577 [2024-11-20 07:09:27.819859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:45.577 [2024-11-20 07:09:27.819940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.577 [2024-11-20 07:09:27.819970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:45.577 [2024-11-20 07:09:27.819984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.577 [2024-11-20 07:09:27.822655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.577 [2024-11-20 07:09:27.822709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.577 BaseBdev1 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.577 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 BaseBdev2_malloc 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 true 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 [2024-11-20 07:09:27.891123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:45.835 [2024-11-20 07:09:27.891182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.835 [2024-11-20 07:09:27.891202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:45.835 [2024-11-20 07:09:27.891213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.835 [2024-11-20 07:09:27.893533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.835 [2024-11-20 07:09:27.893578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.835 BaseBdev2 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 BaseBdev3_malloc 00:12:45.835 07:09:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 true 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 [2024-11-20 07:09:27.974872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:45.835 [2024-11-20 07:09:27.974938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.835 [2024-11-20 07:09:27.974960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:45.835 [2024-11-20 07:09:27.974972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.835 [2024-11-20 07:09:27.977429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.835 [2024-11-20 07:09:27.977472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:45.835 BaseBdev3 00:12:45.835 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.836 [2024-11-20 07:09:27.986928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.836 [2024-11-20 07:09:27.988982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.836 [2024-11-20 07:09:27.989154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.836 [2024-11-20 07:09:27.989439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:45.836 [2024-11-20 07:09:27.989455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:45.836 [2024-11-20 07:09:27.989749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:45.836 [2024-11-20 07:09:27.989916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:45.836 [2024-11-20 07:09:27.989931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:45.836 [2024-11-20 07:09:27.990119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.836 07:09:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.836 07:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.836 07:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.836 07:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.836 "name": "raid_bdev1", 00:12:45.836 "uuid": "90229a45-3dab-4ddc-b59b-8ef06963c8c3", 00:12:45.836 "strip_size_kb": 64, 00:12:45.836 "state": "online", 00:12:45.836 "raid_level": "concat", 00:12:45.836 "superblock": true, 00:12:45.836 "num_base_bdevs": 3, 00:12:45.836 "num_base_bdevs_discovered": 3, 00:12:45.836 "num_base_bdevs_operational": 3, 00:12:45.836 "base_bdevs_list": [ 00:12:45.836 { 00:12:45.836 "name": "BaseBdev1", 00:12:45.836 "uuid": "5cb12a18-17b3-56cc-ac9b-7ba6f46e60bb", 00:12:45.836 "is_configured": true, 00:12:45.836 "data_offset": 2048, 00:12:45.836 "data_size": 63488 00:12:45.836 }, 00:12:45.836 { 00:12:45.836 "name": "BaseBdev2", 00:12:45.836 "uuid": "98254821-f669-5384-8677-396dff6f724c", 00:12:45.836 "is_configured": true, 00:12:45.836 "data_offset": 2048, 00:12:45.836 "data_size": 63488 
00:12:45.836 }, 00:12:45.836 { 00:12:45.836 "name": "BaseBdev3", 00:12:45.836 "uuid": "f01a7209-66d6-5a07-a10b-16fad3f1aa8d", 00:12:45.836 "is_configured": true, 00:12:45.836 "data_offset": 2048, 00:12:45.836 "data_size": 63488 00:12:45.836 } 00:12:45.836 ] 00:12:45.836 }' 00:12:45.836 07:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.836 07:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.095 07:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:46.095 07:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:46.354 [2024-11-20 07:09:28.443539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.291 "name": "raid_bdev1", 00:12:47.291 "uuid": "90229a45-3dab-4ddc-b59b-8ef06963c8c3", 00:12:47.291 "strip_size_kb": 64, 00:12:47.291 "state": "online", 00:12:47.291 "raid_level": "concat", 00:12:47.291 "superblock": true, 00:12:47.291 "num_base_bdevs": 3, 00:12:47.291 "num_base_bdevs_discovered": 3, 00:12:47.291 "num_base_bdevs_operational": 3, 00:12:47.291 "base_bdevs_list": [ 00:12:47.291 { 00:12:47.291 "name": "BaseBdev1", 00:12:47.291 "uuid": "5cb12a18-17b3-56cc-ac9b-7ba6f46e60bb", 00:12:47.291 "is_configured": true, 00:12:47.291 "data_offset": 2048, 00:12:47.291 "data_size": 63488 
00:12:47.291 }, 00:12:47.291 { 00:12:47.291 "name": "BaseBdev2", 00:12:47.291 "uuid": "98254821-f669-5384-8677-396dff6f724c", 00:12:47.291 "is_configured": true, 00:12:47.291 "data_offset": 2048, 00:12:47.291 "data_size": 63488 00:12:47.291 }, 00:12:47.291 { 00:12:47.291 "name": "BaseBdev3", 00:12:47.291 "uuid": "f01a7209-66d6-5a07-a10b-16fad3f1aa8d", 00:12:47.291 "is_configured": true, 00:12:47.291 "data_offset": 2048, 00:12:47.291 "data_size": 63488 00:12:47.291 } 00:12:47.291 ] 00:12:47.291 }' 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.291 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.550 [2024-11-20 07:09:29.804150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.550 [2024-11-20 07:09:29.804183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.550 [2024-11-20 07:09:29.807247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.550 [2024-11-20 07:09:29.807299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.550 [2024-11-20 07:09:29.807349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.550 [2024-11-20 07:09:29.807362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:47.550 { 00:12:47.550 "results": [ 00:12:47.550 { 00:12:47.550 "job": "raid_bdev1", 00:12:47.550 "core_mask": "0x1", 00:12:47.550 "workload": "randrw", 00:12:47.550 "percentage": 50, 
00:12:47.550 "status": "finished", 00:12:47.550 "queue_depth": 1, 00:12:47.550 "io_size": 131072, 00:12:47.550 "runtime": 1.361127, 00:12:47.550 "iops": 14396.893162798182, 00:12:47.550 "mibps": 1799.6116453497727, 00:12:47.550 "io_failed": 1, 00:12:47.550 "io_timeout": 0, 00:12:47.550 "avg_latency_us": 96.43981564774754, 00:12:47.550 "min_latency_us": 28.05938864628821, 00:12:47.550 "max_latency_us": 1488.1537117903931 00:12:47.550 } 00:12:47.550 ], 00:12:47.550 "core_count": 1 00:12:47.550 } 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67422 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67422 ']' 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67422 00:12:47.550 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:47.809 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.809 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67422 00:12:47.809 killing process with pid 67422 00:12:47.809 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.809 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.809 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67422' 00:12:47.809 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67422 00:12:47.809 [2024-11-20 07:09:29.844436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.809 07:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67422 00:12:48.068 [2024-11-20 
07:09:30.104999] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZYyv5mmmvO 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:49.446 00:12:49.446 real 0m4.604s 00:12:49.446 user 0m5.458s 00:12:49.446 sys 0m0.529s 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.446 ************************************ 00:12:49.446 END TEST raid_read_error_test 00:12:49.446 ************************************ 00:12:49.446 07:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.446 07:09:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:12:49.446 07:09:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:49.446 07:09:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.446 07:09:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.446 ************************************ 00:12:49.446 START TEST raid_write_error_test 00:12:49.446 ************************************ 00:12:49.446 07:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:12:49.446 07:09:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:49.447 07:09:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j2k4m9ZYLj 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67562 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67562 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67562 ']' 00:12:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.447 07:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 [2024-11-20 07:09:31.517404] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:49.447 [2024-11-20 07:09:31.517535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67562 ] 00:12:49.447 [2024-11-20 07:09:31.693305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.706 [2024-11-20 07:09:31.816430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.994 [2024-11-20 07:09:32.028834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.994 [2024-11-20 07:09:32.028886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.254 BaseBdev1_malloc 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.254 true 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.254 [2024-11-20 07:09:32.457851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:50.254 [2024-11-20 07:09:32.457914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.254 [2024-11-20 07:09:32.457938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:50.254 [2024-11-20 07:09:32.457950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.254 [2024-11-20 07:09:32.460257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.254 [2024-11-20 07:09:32.460299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:50.254 BaseBdev1 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.254 BaseBdev2_malloc 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.254 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.515 true 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.515 [2024-11-20 07:09:32.528961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:50.515 [2024-11-20 07:09:32.529018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.515 [2024-11-20 07:09:32.529036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:50.515 [2024-11-20 07:09:32.529047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.515 [2024-11-20 07:09:32.531467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.515 [2024-11-20 07:09:32.531506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:50.515 BaseBdev2 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.515 07:09:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.515 BaseBdev3_malloc 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.515 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.515 true 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.516 [2024-11-20 07:09:32.610714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:50.516 [2024-11-20 07:09:32.610774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.516 [2024-11-20 07:09:32.610796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:50.516 [2024-11-20 07:09:32.610807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.516 [2024-11-20 07:09:32.613165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.516 [2024-11-20 07:09:32.613230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:50.516 BaseBdev3 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.516 [2024-11-20 07:09:32.622778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.516 [2024-11-20 07:09:32.624858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.516 [2024-11-20 07:09:32.624950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.516 [2024-11-20 07:09:32.625185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:50.516 [2024-11-20 07:09:32.625199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:50.516 [2024-11-20 07:09:32.625510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:50.516 [2024-11-20 07:09:32.625688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:50.516 [2024-11-20 07:09:32.625709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:50.516 [2024-11-20 07:09:32.625914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.516 "name": "raid_bdev1", 00:12:50.516 "uuid": "5dc61fe1-3a61-4b63-ac3e-0e912dfc2e42", 00:12:50.516 "strip_size_kb": 64, 00:12:50.516 "state": "online", 00:12:50.516 "raid_level": "concat", 00:12:50.516 "superblock": true, 00:12:50.516 "num_base_bdevs": 3, 00:12:50.516 "num_base_bdevs_discovered": 3, 00:12:50.516 "num_base_bdevs_operational": 3, 00:12:50.516 "base_bdevs_list": [ 00:12:50.516 { 00:12:50.516 
"name": "BaseBdev1", 00:12:50.516 "uuid": "92f0caeb-5290-509e-9357-d24f095f1b73", 00:12:50.516 "is_configured": true, 00:12:50.516 "data_offset": 2048, 00:12:50.516 "data_size": 63488 00:12:50.516 }, 00:12:50.516 { 00:12:50.516 "name": "BaseBdev2", 00:12:50.516 "uuid": "e5887adf-007d-5338-87b5-3769074ac4f3", 00:12:50.516 "is_configured": true, 00:12:50.516 "data_offset": 2048, 00:12:50.516 "data_size": 63488 00:12:50.516 }, 00:12:50.516 { 00:12:50.516 "name": "BaseBdev3", 00:12:50.516 "uuid": "1151f597-07bb-500a-a431-d0003d92e945", 00:12:50.516 "is_configured": true, 00:12:50.516 "data_offset": 2048, 00:12:50.516 "data_size": 63488 00:12:50.516 } 00:12:50.516 ] 00:12:50.516 }' 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.516 07:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.083 07:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:51.083 07:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:51.083 [2024-11-20 07:09:33.199252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.020 "name": "raid_bdev1", 00:12:52.020 "uuid": "5dc61fe1-3a61-4b63-ac3e-0e912dfc2e42", 00:12:52.020 "strip_size_kb": 64, 00:12:52.020 "state": "online", 
00:12:52.020 "raid_level": "concat", 00:12:52.020 "superblock": true, 00:12:52.020 "num_base_bdevs": 3, 00:12:52.020 "num_base_bdevs_discovered": 3, 00:12:52.020 "num_base_bdevs_operational": 3, 00:12:52.020 "base_bdevs_list": [ 00:12:52.020 { 00:12:52.020 "name": "BaseBdev1", 00:12:52.020 "uuid": "92f0caeb-5290-509e-9357-d24f095f1b73", 00:12:52.020 "is_configured": true, 00:12:52.020 "data_offset": 2048, 00:12:52.020 "data_size": 63488 00:12:52.020 }, 00:12:52.020 { 00:12:52.020 "name": "BaseBdev2", 00:12:52.020 "uuid": "e5887adf-007d-5338-87b5-3769074ac4f3", 00:12:52.020 "is_configured": true, 00:12:52.020 "data_offset": 2048, 00:12:52.020 "data_size": 63488 00:12:52.020 }, 00:12:52.020 { 00:12:52.020 "name": "BaseBdev3", 00:12:52.020 "uuid": "1151f597-07bb-500a-a431-d0003d92e945", 00:12:52.020 "is_configured": true, 00:12:52.020 "data_offset": 2048, 00:12:52.020 "data_size": 63488 00:12:52.020 } 00:12:52.020 ] 00:12:52.020 }' 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.020 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.588 [2024-11-20 07:09:34.551208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.588 [2024-11-20 07:09:34.551322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.588 [2024-11-20 07:09:34.554519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.588 [2024-11-20 07:09:34.554597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.588 [2024-11-20 07:09:34.554666] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.588 [2024-11-20 07:09:34.554716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:52.588 { 00:12:52.588 "results": [ 00:12:52.588 { 00:12:52.588 "job": "raid_bdev1", 00:12:52.588 "core_mask": "0x1", 00:12:52.588 "workload": "randrw", 00:12:52.588 "percentage": 50, 00:12:52.588 "status": "finished", 00:12:52.588 "queue_depth": 1, 00:12:52.588 "io_size": 131072, 00:12:52.588 "runtime": 1.35271, 00:12:52.588 "iops": 14522.698878547508, 00:12:52.588 "mibps": 1815.3373598184385, 00:12:52.588 "io_failed": 1, 00:12:52.588 "io_timeout": 0, 00:12:52.588 "avg_latency_us": 95.50769493395546, 00:12:52.588 "min_latency_us": 26.941484716157206, 00:12:52.588 "max_latency_us": 1430.9170305676855 00:12:52.588 } 00:12:52.588 ], 00:12:52.588 "core_count": 1 00:12:52.588 } 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67562 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67562 ']' 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67562 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67562 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67562' 00:12:52.588 killing process with pid 67562 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67562 00:12:52.588 [2024-11-20 07:09:34.600704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.588 07:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67562 00:12:52.846 [2024-11-20 07:09:34.865532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j2k4m9ZYLj 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:12:54.224 00:12:54.224 real 0m4.733s 00:12:54.224 user 0m5.631s 00:12:54.224 sys 0m0.580s 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.224 07:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.224 ************************************ 00:12:54.224 END TEST raid_write_error_test 00:12:54.224 ************************************ 00:12:54.224 07:09:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:54.224 07:09:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:54.224 07:09:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:54.224 07:09:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.224 07:09:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.224 ************************************ 00:12:54.224 START TEST raid_state_function_test 00:12:54.224 ************************************ 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67706 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67706' 00:12:54.224 Process raid pid: 67706 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67706 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67706 ']' 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.224 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.224 [2024-11-20 07:09:36.307645] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:54.224 [2024-11-20 07:09:36.307776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.224 [2024-11-20 07:09:36.467273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.483 [2024-11-20 07:09:36.597735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.742 [2024-11-20 07:09:36.822912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.742 [2024-11-20 07:09:36.822959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.001 [2024-11-20 07:09:37.183240] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.001 [2024-11-20 07:09:37.183388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.001 [2024-11-20 07:09:37.183407] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.001 [2024-11-20 07:09:37.183420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.001 [2024-11-20 07:09:37.183428] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.001 [2024-11-20 07:09:37.183438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.001 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.002 
07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.002 "name": "Existed_Raid", 00:12:55.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.002 "strip_size_kb": 0, 00:12:55.002 "state": "configuring", 00:12:55.002 "raid_level": "raid1", 00:12:55.002 "superblock": false, 00:12:55.002 "num_base_bdevs": 3, 00:12:55.002 "num_base_bdevs_discovered": 0, 00:12:55.002 "num_base_bdevs_operational": 3, 00:12:55.002 "base_bdevs_list": [ 00:12:55.002 { 00:12:55.002 "name": "BaseBdev1", 00:12:55.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.002 "is_configured": false, 00:12:55.002 "data_offset": 0, 00:12:55.002 "data_size": 0 00:12:55.002 }, 00:12:55.002 { 00:12:55.002 "name": "BaseBdev2", 00:12:55.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.002 "is_configured": false, 00:12:55.002 "data_offset": 0, 00:12:55.002 "data_size": 0 00:12:55.002 }, 00:12:55.002 { 00:12:55.002 "name": "BaseBdev3", 00:12:55.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.002 "is_configured": false, 00:12:55.002 "data_offset": 0, 00:12:55.002 "data_size": 0 00:12:55.002 } 00:12:55.002 ] 00:12:55.002 }' 00:12:55.002 07:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.002 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.570 [2024-11-20 07:09:37.654414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.570 [2024-11-20 07:09:37.654507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.570 [2024-11-20 07:09:37.666407] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.570 [2024-11-20 07:09:37.666509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.570 [2024-11-20 07:09:37.666525] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.570 [2024-11-20 07:09:37.666538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.570 [2024-11-20 07:09:37.666546] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.570 [2024-11-20 07:09:37.666557] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.570 [2024-11-20 07:09:37.722281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.570 BaseBdev1 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.570 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.570 [ 00:12:55.570 { 00:12:55.570 "name": "BaseBdev1", 00:12:55.570 "aliases": [ 00:12:55.570 "423a45d1-6f4b-4eb5-94f7-ce3266b5c104" 00:12:55.570 ], 00:12:55.570 "product_name": "Malloc disk", 00:12:55.570 "block_size": 512, 00:12:55.570 "num_blocks": 65536, 00:12:55.570 "uuid": "423a45d1-6f4b-4eb5-94f7-ce3266b5c104", 00:12:55.570 "assigned_rate_limits": { 00:12:55.570 "rw_ios_per_sec": 0, 00:12:55.570 "rw_mbytes_per_sec": 0, 00:12:55.570 "r_mbytes_per_sec": 0, 00:12:55.570 "w_mbytes_per_sec": 0 00:12:55.570 }, 00:12:55.570 "claimed": true, 00:12:55.570 "claim_type": "exclusive_write", 00:12:55.570 "zoned": false, 00:12:55.570 "supported_io_types": { 00:12:55.570 "read": true, 00:12:55.570 "write": true, 00:12:55.570 "unmap": true, 00:12:55.570 "flush": true, 00:12:55.570 "reset": true, 00:12:55.570 "nvme_admin": false, 00:12:55.570 "nvme_io": false, 00:12:55.570 "nvme_io_md": false, 00:12:55.570 "write_zeroes": true, 00:12:55.570 "zcopy": true, 00:12:55.570 "get_zone_info": false, 00:12:55.570 "zone_management": false, 00:12:55.570 "zone_append": false, 00:12:55.570 "compare": false, 00:12:55.570 "compare_and_write": false, 00:12:55.570 "abort": true, 00:12:55.570 "seek_hole": false, 00:12:55.570 "seek_data": false, 00:12:55.570 "copy": true, 00:12:55.570 "nvme_iov_md": false 00:12:55.570 }, 00:12:55.570 "memory_domains": [ 00:12:55.570 { 00:12:55.570 "dma_device_id": "system", 00:12:55.570 "dma_device_type": 1 00:12:55.571 }, 00:12:55.571 { 00:12:55.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.571 "dma_device_type": 2 00:12:55.571 } 00:12:55.571 ], 00:12:55.571 "driver_specific": {} 00:12:55.571 } 00:12:55.571 ] 00:12:55.571 07:09:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:55.571 "name": "Existed_Raid", 00:12:55.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.571 "strip_size_kb": 0, 00:12:55.571 "state": "configuring", 00:12:55.571 "raid_level": "raid1", 00:12:55.571 "superblock": false, 00:12:55.571 "num_base_bdevs": 3, 00:12:55.571 "num_base_bdevs_discovered": 1, 00:12:55.571 "num_base_bdevs_operational": 3, 00:12:55.571 "base_bdevs_list": [ 00:12:55.571 { 00:12:55.571 "name": "BaseBdev1", 00:12:55.571 "uuid": "423a45d1-6f4b-4eb5-94f7-ce3266b5c104", 00:12:55.571 "is_configured": true, 00:12:55.571 "data_offset": 0, 00:12:55.571 "data_size": 65536 00:12:55.571 }, 00:12:55.571 { 00:12:55.571 "name": "BaseBdev2", 00:12:55.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.571 "is_configured": false, 00:12:55.571 "data_offset": 0, 00:12:55.571 "data_size": 0 00:12:55.571 }, 00:12:55.571 { 00:12:55.571 "name": "BaseBdev3", 00:12:55.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.571 "is_configured": false, 00:12:55.571 "data_offset": 0, 00:12:55.571 "data_size": 0 00:12:55.571 } 00:12:55.571 ] 00:12:55.571 }' 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.571 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.138 [2024-11-20 07:09:38.197535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.138 [2024-11-20 07:09:38.197597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.138 [2024-11-20 07:09:38.205554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.138 [2024-11-20 07:09:38.207549] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.138 [2024-11-20 07:09:38.207644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.138 [2024-11-20 07:09:38.207680] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:56.138 [2024-11-20 07:09:38.207706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.138 "name": "Existed_Raid", 00:12:56.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.138 "strip_size_kb": 0, 00:12:56.138 "state": "configuring", 00:12:56.138 "raid_level": "raid1", 00:12:56.138 "superblock": false, 00:12:56.138 "num_base_bdevs": 3, 00:12:56.138 "num_base_bdevs_discovered": 1, 00:12:56.138 "num_base_bdevs_operational": 3, 00:12:56.138 "base_bdevs_list": [ 00:12:56.138 { 00:12:56.138 "name": "BaseBdev1", 00:12:56.138 "uuid": "423a45d1-6f4b-4eb5-94f7-ce3266b5c104", 00:12:56.138 "is_configured": true, 00:12:56.138 "data_offset": 0, 00:12:56.138 "data_size": 65536 00:12:56.138 }, 00:12:56.138 { 00:12:56.138 "name": "BaseBdev2", 00:12:56.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.138 
"is_configured": false, 00:12:56.138 "data_offset": 0, 00:12:56.138 "data_size": 0 00:12:56.138 }, 00:12:56.138 { 00:12:56.138 "name": "BaseBdev3", 00:12:56.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.138 "is_configured": false, 00:12:56.138 "data_offset": 0, 00:12:56.138 "data_size": 0 00:12:56.138 } 00:12:56.138 ] 00:12:56.138 }' 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.138 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.706 [2024-11-20 07:09:38.709229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.706 BaseBdev2 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.706 07:09:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.706 [ 00:12:56.706 { 00:12:56.706 "name": "BaseBdev2", 00:12:56.706 "aliases": [ 00:12:56.706 "e8be5592-57c5-48a4-afad-fe62a30527dc" 00:12:56.706 ], 00:12:56.706 "product_name": "Malloc disk", 00:12:56.706 "block_size": 512, 00:12:56.706 "num_blocks": 65536, 00:12:56.706 "uuid": "e8be5592-57c5-48a4-afad-fe62a30527dc", 00:12:56.706 "assigned_rate_limits": { 00:12:56.706 "rw_ios_per_sec": 0, 00:12:56.706 "rw_mbytes_per_sec": 0, 00:12:56.706 "r_mbytes_per_sec": 0, 00:12:56.706 "w_mbytes_per_sec": 0 00:12:56.706 }, 00:12:56.706 "claimed": true, 00:12:56.706 "claim_type": "exclusive_write", 00:12:56.706 "zoned": false, 00:12:56.706 "supported_io_types": { 00:12:56.706 "read": true, 00:12:56.706 "write": true, 00:12:56.706 "unmap": true, 00:12:56.706 "flush": true, 00:12:56.706 "reset": true, 00:12:56.706 "nvme_admin": false, 00:12:56.706 "nvme_io": false, 00:12:56.706 "nvme_io_md": false, 00:12:56.706 "write_zeroes": true, 00:12:56.706 "zcopy": true, 00:12:56.706 "get_zone_info": false, 00:12:56.706 "zone_management": false, 00:12:56.706 "zone_append": false, 00:12:56.706 "compare": false, 00:12:56.706 "compare_and_write": false, 00:12:56.706 "abort": true, 00:12:56.706 "seek_hole": false, 00:12:56.706 "seek_data": false, 00:12:56.706 "copy": true, 00:12:56.706 "nvme_iov_md": false 00:12:56.706 }, 00:12:56.706 
"memory_domains": [ 00:12:56.706 { 00:12:56.706 "dma_device_id": "system", 00:12:56.706 "dma_device_type": 1 00:12:56.706 }, 00:12:56.706 { 00:12:56.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.706 "dma_device_type": 2 00:12:56.706 } 00:12:56.706 ], 00:12:56.706 "driver_specific": {} 00:12:56.706 } 00:12:56.706 ] 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.706 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.707 "name": "Existed_Raid", 00:12:56.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.707 "strip_size_kb": 0, 00:12:56.707 "state": "configuring", 00:12:56.707 "raid_level": "raid1", 00:12:56.707 "superblock": false, 00:12:56.707 "num_base_bdevs": 3, 00:12:56.707 "num_base_bdevs_discovered": 2, 00:12:56.707 "num_base_bdevs_operational": 3, 00:12:56.707 "base_bdevs_list": [ 00:12:56.707 { 00:12:56.707 "name": "BaseBdev1", 00:12:56.707 "uuid": "423a45d1-6f4b-4eb5-94f7-ce3266b5c104", 00:12:56.707 "is_configured": true, 00:12:56.707 "data_offset": 0, 00:12:56.707 "data_size": 65536 00:12:56.707 }, 00:12:56.707 { 00:12:56.707 "name": "BaseBdev2", 00:12:56.707 "uuid": "e8be5592-57c5-48a4-afad-fe62a30527dc", 00:12:56.707 "is_configured": true, 00:12:56.707 "data_offset": 0, 00:12:56.707 "data_size": 65536 00:12:56.707 }, 00:12:56.707 { 00:12:56.707 "name": "BaseBdev3", 00:12:56.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.707 "is_configured": false, 00:12:56.707 "data_offset": 0, 00:12:56.707 "data_size": 0 00:12:56.707 } 00:12:56.707 ] 00:12:56.707 }' 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.707 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.966 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:12:56.966 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.966 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.224 [2024-11-20 07:09:39.259546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.224 [2024-11-20 07:09:39.259595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.224 [2024-11-20 07:09:39.259607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:57.224 [2024-11-20 07:09:39.259882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:57.224 [2024-11-20 07:09:39.260044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.224 [2024-11-20 07:09:39.260053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:57.224 [2024-11-20 07:09:39.260326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.224 BaseBdev3 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.224 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.224 [ 00:12:57.224 { 00:12:57.224 "name": "BaseBdev3", 00:12:57.224 "aliases": [ 00:12:57.224 "b91f9bfe-d3a9-4322-a38b-6bd467779227" 00:12:57.224 ], 00:12:57.224 "product_name": "Malloc disk", 00:12:57.224 "block_size": 512, 00:12:57.224 "num_blocks": 65536, 00:12:57.224 "uuid": "b91f9bfe-d3a9-4322-a38b-6bd467779227", 00:12:57.224 "assigned_rate_limits": { 00:12:57.224 "rw_ios_per_sec": 0, 00:12:57.224 "rw_mbytes_per_sec": 0, 00:12:57.224 "r_mbytes_per_sec": 0, 00:12:57.224 "w_mbytes_per_sec": 0 00:12:57.224 }, 00:12:57.224 "claimed": true, 00:12:57.224 "claim_type": "exclusive_write", 00:12:57.224 "zoned": false, 00:12:57.224 "supported_io_types": { 00:12:57.224 "read": true, 00:12:57.224 "write": true, 00:12:57.224 "unmap": true, 00:12:57.224 "flush": true, 00:12:57.224 "reset": true, 00:12:57.224 "nvme_admin": false, 00:12:57.224 "nvme_io": false, 00:12:57.224 "nvme_io_md": false, 00:12:57.224 "write_zeroes": true, 00:12:57.224 "zcopy": true, 00:12:57.224 "get_zone_info": false, 00:12:57.224 "zone_management": false, 00:12:57.224 "zone_append": false, 00:12:57.224 "compare": false, 00:12:57.224 "compare_and_write": false, 00:12:57.224 "abort": true, 00:12:57.224 "seek_hole": false, 00:12:57.224 "seek_data": false, 00:12:57.224 
"copy": true, 00:12:57.224 "nvme_iov_md": false 00:12:57.224 }, 00:12:57.224 "memory_domains": [ 00:12:57.224 { 00:12:57.224 "dma_device_id": "system", 00:12:57.224 "dma_device_type": 1 00:12:57.224 }, 00:12:57.224 { 00:12:57.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.225 "dma_device_type": 2 00:12:57.225 } 00:12:57.225 ], 00:12:57.225 "driver_specific": {} 00:12:57.225 } 00:12:57.225 ] 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.225 07:09:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.225 "name": "Existed_Raid", 00:12:57.225 "uuid": "1671edde-d93a-418b-b7e7-58489950b192", 00:12:57.225 "strip_size_kb": 0, 00:12:57.225 "state": "online", 00:12:57.225 "raid_level": "raid1", 00:12:57.225 "superblock": false, 00:12:57.225 "num_base_bdevs": 3, 00:12:57.225 "num_base_bdevs_discovered": 3, 00:12:57.225 "num_base_bdevs_operational": 3, 00:12:57.225 "base_bdevs_list": [ 00:12:57.225 { 00:12:57.225 "name": "BaseBdev1", 00:12:57.225 "uuid": "423a45d1-6f4b-4eb5-94f7-ce3266b5c104", 00:12:57.225 "is_configured": true, 00:12:57.225 "data_offset": 0, 00:12:57.225 "data_size": 65536 00:12:57.225 }, 00:12:57.225 { 00:12:57.225 "name": "BaseBdev2", 00:12:57.225 "uuid": "e8be5592-57c5-48a4-afad-fe62a30527dc", 00:12:57.225 "is_configured": true, 00:12:57.225 "data_offset": 0, 00:12:57.225 "data_size": 65536 00:12:57.225 }, 00:12:57.225 { 00:12:57.225 "name": "BaseBdev3", 00:12:57.225 "uuid": "b91f9bfe-d3a9-4322-a38b-6bd467779227", 00:12:57.225 "is_configured": true, 00:12:57.225 "data_offset": 0, 00:12:57.225 "data_size": 65536 00:12:57.225 } 00:12:57.225 ] 00:12:57.225 }' 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.225 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.794 07:09:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.794 [2024-11-20 07:09:39.779098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.794 "name": "Existed_Raid", 00:12:57.794 "aliases": [ 00:12:57.794 "1671edde-d93a-418b-b7e7-58489950b192" 00:12:57.794 ], 00:12:57.794 "product_name": "Raid Volume", 00:12:57.794 "block_size": 512, 00:12:57.794 "num_blocks": 65536, 00:12:57.794 "uuid": "1671edde-d93a-418b-b7e7-58489950b192", 00:12:57.794 "assigned_rate_limits": { 00:12:57.794 "rw_ios_per_sec": 0, 00:12:57.794 "rw_mbytes_per_sec": 0, 00:12:57.794 "r_mbytes_per_sec": 0, 00:12:57.794 "w_mbytes_per_sec": 0 00:12:57.794 }, 00:12:57.794 "claimed": false, 00:12:57.794 "zoned": false, 
00:12:57.794 "supported_io_types": { 00:12:57.794 "read": true, 00:12:57.794 "write": true, 00:12:57.794 "unmap": false, 00:12:57.794 "flush": false, 00:12:57.794 "reset": true, 00:12:57.794 "nvme_admin": false, 00:12:57.794 "nvme_io": false, 00:12:57.794 "nvme_io_md": false, 00:12:57.794 "write_zeroes": true, 00:12:57.794 "zcopy": false, 00:12:57.794 "get_zone_info": false, 00:12:57.794 "zone_management": false, 00:12:57.794 "zone_append": false, 00:12:57.794 "compare": false, 00:12:57.794 "compare_and_write": false, 00:12:57.794 "abort": false, 00:12:57.794 "seek_hole": false, 00:12:57.794 "seek_data": false, 00:12:57.794 "copy": false, 00:12:57.794 "nvme_iov_md": false 00:12:57.794 }, 00:12:57.794 "memory_domains": [ 00:12:57.794 { 00:12:57.794 "dma_device_id": "system", 00:12:57.794 "dma_device_type": 1 00:12:57.794 }, 00:12:57.794 { 00:12:57.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.794 "dma_device_type": 2 00:12:57.794 }, 00:12:57.794 { 00:12:57.794 "dma_device_id": "system", 00:12:57.794 "dma_device_type": 1 00:12:57.794 }, 00:12:57.794 { 00:12:57.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.794 "dma_device_type": 2 00:12:57.794 }, 00:12:57.794 { 00:12:57.794 "dma_device_id": "system", 00:12:57.794 "dma_device_type": 1 00:12:57.794 }, 00:12:57.794 { 00:12:57.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.794 "dma_device_type": 2 00:12:57.794 } 00:12:57.794 ], 00:12:57.794 "driver_specific": { 00:12:57.794 "raid": { 00:12:57.794 "uuid": "1671edde-d93a-418b-b7e7-58489950b192", 00:12:57.794 "strip_size_kb": 0, 00:12:57.794 "state": "online", 00:12:57.794 "raid_level": "raid1", 00:12:57.794 "superblock": false, 00:12:57.794 "num_base_bdevs": 3, 00:12:57.794 "num_base_bdevs_discovered": 3, 00:12:57.794 "num_base_bdevs_operational": 3, 00:12:57.794 "base_bdevs_list": [ 00:12:57.794 { 00:12:57.794 "name": "BaseBdev1", 00:12:57.794 "uuid": "423a45d1-6f4b-4eb5-94f7-ce3266b5c104", 00:12:57.794 "is_configured": true, 00:12:57.794 
"data_offset": 0, 00:12:57.794 "data_size": 65536 00:12:57.794 }, 00:12:57.794 { 00:12:57.794 "name": "BaseBdev2", 00:12:57.794 "uuid": "e8be5592-57c5-48a4-afad-fe62a30527dc", 00:12:57.794 "is_configured": true, 00:12:57.794 "data_offset": 0, 00:12:57.794 "data_size": 65536 00:12:57.794 }, 00:12:57.794 { 00:12:57.794 "name": "BaseBdev3", 00:12:57.794 "uuid": "b91f9bfe-d3a9-4322-a38b-6bd467779227", 00:12:57.794 "is_configured": true, 00:12:57.794 "data_offset": 0, 00:12:57.794 "data_size": 65536 00:12:57.794 } 00:12:57.794 ] 00:12:57.794 } 00:12:57.794 } 00:12:57.794 }' 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:57.794 BaseBdev2 00:12:57.794 BaseBdev3' 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.794 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.795 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.795 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.795 [2024-11-20 07:09:40.054404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.056 "name": "Existed_Raid", 00:12:58.056 "uuid": "1671edde-d93a-418b-b7e7-58489950b192", 00:12:58.056 "strip_size_kb": 0, 00:12:58.056 "state": "online", 00:12:58.056 "raid_level": "raid1", 00:12:58.056 "superblock": false, 00:12:58.056 "num_base_bdevs": 3, 00:12:58.056 "num_base_bdevs_discovered": 2, 00:12:58.056 "num_base_bdevs_operational": 2, 00:12:58.056 "base_bdevs_list": [ 00:12:58.056 { 00:12:58.056 "name": null, 00:12:58.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.056 "is_configured": false, 00:12:58.056 "data_offset": 0, 00:12:58.056 "data_size": 65536 00:12:58.056 }, 00:12:58.056 { 00:12:58.056 "name": "BaseBdev2", 00:12:58.056 "uuid": "e8be5592-57c5-48a4-afad-fe62a30527dc", 00:12:58.056 "is_configured": true, 00:12:58.056 "data_offset": 0, 00:12:58.056 "data_size": 65536 00:12:58.056 }, 00:12:58.056 { 00:12:58.056 "name": "BaseBdev3", 00:12:58.056 "uuid": "b91f9bfe-d3a9-4322-a38b-6bd467779227", 00:12:58.056 "is_configured": true, 00:12:58.056 "data_offset": 0, 00:12:58.056 "data_size": 65536 00:12:58.056 } 00:12:58.056 ] 
00:12:58.056 }' 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.056 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.623 [2024-11-20 07:09:40.651732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.623 07:09:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.623 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.623 [2024-11-20 07:09:40.815046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:58.623 [2024-11-20 07:09:40.815156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.883 [2024-11-20 07:09:40.930658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.883 [2024-11-20 07:09:40.930780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.883 [2024-11-20 07:09:40.930831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.883 07:09:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.883 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.883 BaseBdev2 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:58.883 
07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.883 [ 00:12:58.883 { 00:12:58.883 "name": "BaseBdev2", 00:12:58.883 "aliases": [ 00:12:58.883 "52801767-cf59-4c9c-aa15-3a077aa342e7" 00:12:58.883 ], 00:12:58.883 "product_name": "Malloc disk", 00:12:58.883 "block_size": 512, 00:12:58.883 "num_blocks": 65536, 00:12:58.883 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:12:58.883 "assigned_rate_limits": { 00:12:58.883 "rw_ios_per_sec": 0, 00:12:58.883 "rw_mbytes_per_sec": 0, 00:12:58.883 "r_mbytes_per_sec": 0, 00:12:58.883 "w_mbytes_per_sec": 0 00:12:58.883 }, 00:12:58.883 "claimed": false, 00:12:58.883 "zoned": false, 00:12:58.883 "supported_io_types": { 00:12:58.883 "read": true, 00:12:58.883 "write": true, 00:12:58.883 "unmap": true, 00:12:58.883 "flush": true, 00:12:58.883 "reset": true, 00:12:58.883 "nvme_admin": false, 00:12:58.883 "nvme_io": false, 00:12:58.883 "nvme_io_md": false, 00:12:58.883 "write_zeroes": true, 
00:12:58.883 "zcopy": true, 00:12:58.883 "get_zone_info": false, 00:12:58.883 "zone_management": false, 00:12:58.883 "zone_append": false, 00:12:58.883 "compare": false, 00:12:58.883 "compare_and_write": false, 00:12:58.883 "abort": true, 00:12:58.883 "seek_hole": false, 00:12:58.883 "seek_data": false, 00:12:58.883 "copy": true, 00:12:58.883 "nvme_iov_md": false 00:12:58.883 }, 00:12:58.883 "memory_domains": [ 00:12:58.883 { 00:12:58.883 "dma_device_id": "system", 00:12:58.883 "dma_device_type": 1 00:12:58.883 }, 00:12:58.883 { 00:12:58.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.883 "dma_device_type": 2 00:12:58.883 } 00:12:58.883 ], 00:12:58.883 "driver_specific": {} 00:12:58.883 } 00:12:58.883 ] 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.883 BaseBdev3 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:58.883 07:09:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.883 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.142 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.143 [ 00:12:59.143 { 00:12:59.143 "name": "BaseBdev3", 00:12:59.143 "aliases": [ 00:12:59.143 "fb0c5684-2679-4084-b251-b90caa83c26f" 00:12:59.143 ], 00:12:59.143 "product_name": "Malloc disk", 00:12:59.143 "block_size": 512, 00:12:59.143 "num_blocks": 65536, 00:12:59.143 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:12:59.143 "assigned_rate_limits": { 00:12:59.143 "rw_ios_per_sec": 0, 00:12:59.143 "rw_mbytes_per_sec": 0, 00:12:59.143 "r_mbytes_per_sec": 0, 00:12:59.143 "w_mbytes_per_sec": 0 00:12:59.143 }, 00:12:59.143 "claimed": false, 00:12:59.143 "zoned": false, 00:12:59.143 "supported_io_types": { 00:12:59.143 "read": true, 00:12:59.143 "write": true, 00:12:59.143 "unmap": true, 00:12:59.143 "flush": true, 00:12:59.143 "reset": true, 00:12:59.143 "nvme_admin": false, 00:12:59.143 "nvme_io": false, 00:12:59.143 "nvme_io_md": false, 00:12:59.143 "write_zeroes": true, 
00:12:59.143 "zcopy": true, 00:12:59.143 "get_zone_info": false, 00:12:59.143 "zone_management": false, 00:12:59.143 "zone_append": false, 00:12:59.143 "compare": false, 00:12:59.143 "compare_and_write": false, 00:12:59.143 "abort": true, 00:12:59.143 "seek_hole": false, 00:12:59.143 "seek_data": false, 00:12:59.143 "copy": true, 00:12:59.143 "nvme_iov_md": false 00:12:59.143 }, 00:12:59.143 "memory_domains": [ 00:12:59.143 { 00:12:59.143 "dma_device_id": "system", 00:12:59.143 "dma_device_type": 1 00:12:59.143 }, 00:12:59.143 { 00:12:59.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.143 "dma_device_type": 2 00:12:59.143 } 00:12:59.143 ], 00:12:59.143 "driver_specific": {} 00:12:59.143 } 00:12:59.143 ] 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.143 [2024-11-20 07:09:41.179179] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.143 [2024-11-20 07:09:41.179301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.143 [2024-11-20 07:09:41.179398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.143 [2024-11-20 07:09:41.181789] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:59.143 "name": "Existed_Raid", 00:12:59.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.143 "strip_size_kb": 0, 00:12:59.143 "state": "configuring", 00:12:59.143 "raid_level": "raid1", 00:12:59.143 "superblock": false, 00:12:59.143 "num_base_bdevs": 3, 00:12:59.143 "num_base_bdevs_discovered": 2, 00:12:59.143 "num_base_bdevs_operational": 3, 00:12:59.143 "base_bdevs_list": [ 00:12:59.143 { 00:12:59.143 "name": "BaseBdev1", 00:12:59.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.143 "is_configured": false, 00:12:59.143 "data_offset": 0, 00:12:59.143 "data_size": 0 00:12:59.143 }, 00:12:59.143 { 00:12:59.143 "name": "BaseBdev2", 00:12:59.143 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:12:59.143 "is_configured": true, 00:12:59.143 "data_offset": 0, 00:12:59.143 "data_size": 65536 00:12:59.143 }, 00:12:59.143 { 00:12:59.143 "name": "BaseBdev3", 00:12:59.143 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:12:59.143 "is_configured": true, 00:12:59.143 "data_offset": 0, 00:12:59.143 "data_size": 65536 00:12:59.143 } 00:12:59.143 ] 00:12:59.143 }' 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.143 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.402 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:59.402 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.402 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.403 [2024-11-20 07:09:41.630475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.403 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.661 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.662 "name": "Existed_Raid", 00:12:59.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.662 "strip_size_kb": 0, 00:12:59.662 "state": "configuring", 00:12:59.662 "raid_level": "raid1", 00:12:59.662 "superblock": false, 00:12:59.662 "num_base_bdevs": 3, 
00:12:59.662 "num_base_bdevs_discovered": 1, 00:12:59.662 "num_base_bdevs_operational": 3, 00:12:59.662 "base_bdevs_list": [ 00:12:59.662 { 00:12:59.662 "name": "BaseBdev1", 00:12:59.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.662 "is_configured": false, 00:12:59.662 "data_offset": 0, 00:12:59.662 "data_size": 0 00:12:59.662 }, 00:12:59.662 { 00:12:59.662 "name": null, 00:12:59.662 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:12:59.662 "is_configured": false, 00:12:59.662 "data_offset": 0, 00:12:59.662 "data_size": 65536 00:12:59.662 }, 00:12:59.662 { 00:12:59.662 "name": "BaseBdev3", 00:12:59.662 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:12:59.662 "is_configured": true, 00:12:59.662 "data_offset": 0, 00:12:59.662 "data_size": 65536 00:12:59.662 } 00:12:59.662 ] 00:12:59.662 }' 00:12:59.662 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.662 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.920 07:09:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 [2024-11-20 07:09:42.152404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.920 BaseBdev1 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.920 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 [ 00:12:59.920 { 00:12:59.920 "name": "BaseBdev1", 00:12:59.920 "aliases": [ 00:12:59.920 "7366c402-7a4c-48bd-97a4-b311a273ac7f" 00:12:59.920 ], 00:12:59.920 "product_name": "Malloc disk", 
00:12:59.920 "block_size": 512, 00:12:59.920 "num_blocks": 65536, 00:12:59.920 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:12:59.920 "assigned_rate_limits": { 00:12:59.920 "rw_ios_per_sec": 0, 00:12:59.920 "rw_mbytes_per_sec": 0, 00:12:59.920 "r_mbytes_per_sec": 0, 00:12:59.920 "w_mbytes_per_sec": 0 00:12:59.920 }, 00:12:59.920 "claimed": true, 00:12:59.921 "claim_type": "exclusive_write", 00:12:59.921 "zoned": false, 00:12:59.921 "supported_io_types": { 00:12:59.921 "read": true, 00:12:59.921 "write": true, 00:12:59.921 "unmap": true, 00:12:59.921 "flush": true, 00:12:59.921 "reset": true, 00:12:59.921 "nvme_admin": false, 00:12:59.921 "nvme_io": false, 00:12:59.921 "nvme_io_md": false, 00:12:59.921 "write_zeroes": true, 00:12:59.921 "zcopy": true, 00:12:59.921 "get_zone_info": false, 00:12:59.921 "zone_management": false, 00:13:00.180 "zone_append": false, 00:13:00.180 "compare": false, 00:13:00.180 "compare_and_write": false, 00:13:00.180 "abort": true, 00:13:00.180 "seek_hole": false, 00:13:00.180 "seek_data": false, 00:13:00.180 "copy": true, 00:13:00.180 "nvme_iov_md": false 00:13:00.180 }, 00:13:00.180 "memory_domains": [ 00:13:00.180 { 00:13:00.180 "dma_device_id": "system", 00:13:00.180 "dma_device_type": 1 00:13:00.180 }, 00:13:00.180 { 00:13:00.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.180 "dma_device_type": 2 00:13:00.180 } 00:13:00.180 ], 00:13:00.180 "driver_specific": {} 00:13:00.180 } 00:13:00.180 ] 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.180 "name": "Existed_Raid", 00:13:00.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.180 "strip_size_kb": 0, 00:13:00.180 "state": "configuring", 00:13:00.180 "raid_level": "raid1", 00:13:00.180 "superblock": false, 00:13:00.180 "num_base_bdevs": 3, 00:13:00.180 "num_base_bdevs_discovered": 2, 00:13:00.180 "num_base_bdevs_operational": 3, 00:13:00.180 "base_bdevs_list": [ 00:13:00.180 { 00:13:00.180 "name": "BaseBdev1", 00:13:00.180 "uuid": 
"7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:00.180 "is_configured": true, 00:13:00.180 "data_offset": 0, 00:13:00.180 "data_size": 65536 00:13:00.180 }, 00:13:00.180 { 00:13:00.180 "name": null, 00:13:00.180 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:13:00.180 "is_configured": false, 00:13:00.180 "data_offset": 0, 00:13:00.180 "data_size": 65536 00:13:00.180 }, 00:13:00.180 { 00:13:00.180 "name": "BaseBdev3", 00:13:00.180 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:13:00.180 "is_configured": true, 00:13:00.180 "data_offset": 0, 00:13:00.180 "data_size": 65536 00:13:00.180 } 00:13:00.180 ] 00:13:00.180 }' 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.180 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.439 [2024-11-20 07:09:42.683571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:00.439 07:09:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.439 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.697 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.697 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.697 "name": "Existed_Raid", 00:13:00.697 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:00.697 "strip_size_kb": 0, 00:13:00.697 "state": "configuring", 00:13:00.697 "raid_level": "raid1", 00:13:00.697 "superblock": false, 00:13:00.697 "num_base_bdevs": 3, 00:13:00.697 "num_base_bdevs_discovered": 1, 00:13:00.697 "num_base_bdevs_operational": 3, 00:13:00.697 "base_bdevs_list": [ 00:13:00.697 { 00:13:00.697 "name": "BaseBdev1", 00:13:00.697 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:00.697 "is_configured": true, 00:13:00.697 "data_offset": 0, 00:13:00.697 "data_size": 65536 00:13:00.697 }, 00:13:00.697 { 00:13:00.697 "name": null, 00:13:00.698 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:13:00.698 "is_configured": false, 00:13:00.698 "data_offset": 0, 00:13:00.698 "data_size": 65536 00:13:00.698 }, 00:13:00.698 { 00:13:00.698 "name": null, 00:13:00.698 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:13:00.698 "is_configured": false, 00:13:00.698 "data_offset": 0, 00:13:00.698 "data_size": 65536 00:13:00.698 } 00:13:00.698 ] 00:13:00.698 }' 00:13:00.698 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.698 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.957 [2024-11-20 07:09:43.158742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.957 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.958 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.958 "name": "Existed_Raid", 00:13:00.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.958 "strip_size_kb": 0, 00:13:00.958 "state": "configuring", 00:13:00.958 "raid_level": "raid1", 00:13:00.958 "superblock": false, 00:13:00.958 "num_base_bdevs": 3, 00:13:00.958 "num_base_bdevs_discovered": 2, 00:13:00.958 "num_base_bdevs_operational": 3, 00:13:00.958 "base_bdevs_list": [ 00:13:00.958 { 00:13:00.958 "name": "BaseBdev1", 00:13:00.958 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:00.958 "is_configured": true, 00:13:00.958 "data_offset": 0, 00:13:00.958 "data_size": 65536 00:13:00.958 }, 00:13:00.958 { 00:13:00.958 "name": null, 00:13:00.958 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:13:00.958 "is_configured": false, 00:13:00.958 "data_offset": 0, 00:13:00.958 "data_size": 65536 00:13:00.958 }, 00:13:00.958 { 00:13:00.958 "name": "BaseBdev3", 00:13:00.958 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:13:00.958 "is_configured": true, 00:13:00.958 "data_offset": 0, 00:13:00.958 "data_size": 65536 00:13:00.958 } 00:13:00.958 ] 00:13:00.958 }' 00:13:00.958 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.958 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.526 [2024-11-20 07:09:43.637966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.526 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.527 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.527 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.527 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.527 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.785 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.785 "name": "Existed_Raid", 00:13:01.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.785 "strip_size_kb": 0, 00:13:01.785 "state": "configuring", 00:13:01.785 "raid_level": "raid1", 00:13:01.785 "superblock": false, 00:13:01.785 "num_base_bdevs": 3, 00:13:01.785 "num_base_bdevs_discovered": 1, 00:13:01.785 "num_base_bdevs_operational": 3, 00:13:01.785 "base_bdevs_list": [ 00:13:01.785 { 00:13:01.785 "name": null, 00:13:01.785 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:01.785 "is_configured": false, 00:13:01.785 "data_offset": 0, 00:13:01.785 "data_size": 65536 00:13:01.785 }, 00:13:01.785 { 00:13:01.785 "name": null, 00:13:01.785 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:13:01.785 "is_configured": false, 00:13:01.785 "data_offset": 0, 00:13:01.785 "data_size": 65536 00:13:01.785 }, 00:13:01.785 { 00:13:01.785 "name": "BaseBdev3", 00:13:01.785 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:13:01.785 "is_configured": true, 00:13:01.785 "data_offset": 0, 00:13:01.785 "data_size": 65536 00:13:01.785 } 00:13:01.785 ] 00:13:01.785 }' 00:13:01.785 07:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.785 07:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.044 [2024-11-20 07:09:44.261099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.044 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.303 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.303 "name": "Existed_Raid", 00:13:02.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.303 "strip_size_kb": 0, 00:13:02.303 "state": "configuring", 00:13:02.303 "raid_level": "raid1", 00:13:02.303 "superblock": false, 00:13:02.303 "num_base_bdevs": 3, 00:13:02.303 "num_base_bdevs_discovered": 2, 00:13:02.303 "num_base_bdevs_operational": 3, 00:13:02.303 "base_bdevs_list": [ 00:13:02.303 { 00:13:02.303 "name": null, 00:13:02.303 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:02.303 "is_configured": false, 00:13:02.303 "data_offset": 0, 00:13:02.303 "data_size": 65536 00:13:02.303 }, 00:13:02.303 { 00:13:02.303 "name": "BaseBdev2", 00:13:02.303 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:13:02.303 "is_configured": true, 00:13:02.303 "data_offset": 0, 00:13:02.303 "data_size": 65536 00:13:02.303 }, 00:13:02.303 { 00:13:02.303 "name": "BaseBdev3", 
00:13:02.303 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:13:02.303 "is_configured": true, 00:13:02.303 "data_offset": 0, 00:13:02.303 "data_size": 65536 00:13:02.303 } 00:13:02.303 ] 00:13:02.303 }' 00:13:02.303 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.303 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7366c402-7a4c-48bd-97a4-b311a273ac7f 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.562 07:09:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.821 [2024-11-20 07:09:44.844955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:02.821 [2024-11-20 07:09:44.845016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:02.821 [2024-11-20 07:09:44.845024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:02.821 [2024-11-20 07:09:44.845336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:02.821 [2024-11-20 07:09:44.845556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:02.821 [2024-11-20 07:09:44.845579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:02.821 [2024-11-20 07:09:44.845869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.821 NewBaseBdev 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.821 
07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.821 [ 00:13:02.821 { 00:13:02.821 "name": "NewBaseBdev", 00:13:02.821 "aliases": [ 00:13:02.821 "7366c402-7a4c-48bd-97a4-b311a273ac7f" 00:13:02.821 ], 00:13:02.821 "product_name": "Malloc disk", 00:13:02.821 "block_size": 512, 00:13:02.821 "num_blocks": 65536, 00:13:02.821 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:02.821 "assigned_rate_limits": { 00:13:02.821 "rw_ios_per_sec": 0, 00:13:02.821 "rw_mbytes_per_sec": 0, 00:13:02.821 "r_mbytes_per_sec": 0, 00:13:02.821 "w_mbytes_per_sec": 0 00:13:02.821 }, 00:13:02.821 "claimed": true, 00:13:02.821 "claim_type": "exclusive_write", 00:13:02.821 "zoned": false, 00:13:02.821 "supported_io_types": { 00:13:02.821 "read": true, 00:13:02.821 "write": true, 00:13:02.821 "unmap": true, 00:13:02.821 "flush": true, 00:13:02.821 "reset": true, 00:13:02.821 "nvme_admin": false, 00:13:02.821 "nvme_io": false, 00:13:02.821 "nvme_io_md": false, 00:13:02.821 "write_zeroes": true, 00:13:02.821 "zcopy": true, 00:13:02.821 "get_zone_info": false, 00:13:02.821 "zone_management": false, 00:13:02.821 "zone_append": false, 00:13:02.821 "compare": false, 00:13:02.821 "compare_and_write": false, 00:13:02.821 "abort": true, 00:13:02.821 "seek_hole": false, 00:13:02.821 "seek_data": false, 00:13:02.821 "copy": true, 00:13:02.821 "nvme_iov_md": false 00:13:02.821 }, 00:13:02.821 "memory_domains": [ 00:13:02.821 { 00:13:02.821 "dma_device_id": "system", 00:13:02.821 "dma_device_type": 1 
00:13:02.821 }, 00:13:02.821 { 00:13:02.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.821 "dma_device_type": 2 00:13:02.821 } 00:13:02.821 ], 00:13:02.821 "driver_specific": {} 00:13:02.821 } 00:13:02.821 ] 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.821 "name": "Existed_Raid", 00:13:02.821 "uuid": "168860fd-2d15-44ee-ab36-1272350cb2c6", 00:13:02.821 "strip_size_kb": 0, 00:13:02.821 "state": "online", 00:13:02.821 "raid_level": "raid1", 00:13:02.821 "superblock": false, 00:13:02.821 "num_base_bdevs": 3, 00:13:02.821 "num_base_bdevs_discovered": 3, 00:13:02.821 "num_base_bdevs_operational": 3, 00:13:02.821 "base_bdevs_list": [ 00:13:02.821 { 00:13:02.821 "name": "NewBaseBdev", 00:13:02.821 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:02.821 "is_configured": true, 00:13:02.821 "data_offset": 0, 00:13:02.821 "data_size": 65536 00:13:02.821 }, 00:13:02.821 { 00:13:02.821 "name": "BaseBdev2", 00:13:02.821 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:13:02.821 "is_configured": true, 00:13:02.821 "data_offset": 0, 00:13:02.821 "data_size": 65536 00:13:02.821 }, 00:13:02.821 { 00:13:02.821 "name": "BaseBdev3", 00:13:02.821 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:13:02.821 "is_configured": true, 00:13:02.821 "data_offset": 0, 00:13:02.821 "data_size": 65536 00:13:02.821 } 00:13:02.821 ] 00:13:02.821 }' 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.821 07:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.080 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 [2024-11-20 07:09:45.328633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.339 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.339 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.339 "name": "Existed_Raid", 00:13:03.340 "aliases": [ 00:13:03.340 "168860fd-2d15-44ee-ab36-1272350cb2c6" 00:13:03.340 ], 00:13:03.340 "product_name": "Raid Volume", 00:13:03.340 "block_size": 512, 00:13:03.340 "num_blocks": 65536, 00:13:03.340 "uuid": "168860fd-2d15-44ee-ab36-1272350cb2c6", 00:13:03.340 "assigned_rate_limits": { 00:13:03.340 "rw_ios_per_sec": 0, 00:13:03.340 "rw_mbytes_per_sec": 0, 00:13:03.340 "r_mbytes_per_sec": 0, 00:13:03.340 "w_mbytes_per_sec": 0 00:13:03.340 }, 00:13:03.340 "claimed": false, 00:13:03.340 "zoned": false, 00:13:03.340 "supported_io_types": { 00:13:03.340 "read": true, 00:13:03.340 "write": true, 00:13:03.340 "unmap": false, 00:13:03.340 "flush": false, 00:13:03.340 "reset": true, 00:13:03.340 "nvme_admin": false, 00:13:03.340 "nvme_io": false, 00:13:03.340 "nvme_io_md": false, 00:13:03.340 "write_zeroes": true, 00:13:03.340 "zcopy": false, 00:13:03.340 "get_zone_info": false, 00:13:03.340 "zone_management": false, 00:13:03.340 
"zone_append": false, 00:13:03.340 "compare": false, 00:13:03.340 "compare_and_write": false, 00:13:03.340 "abort": false, 00:13:03.340 "seek_hole": false, 00:13:03.340 "seek_data": false, 00:13:03.340 "copy": false, 00:13:03.340 "nvme_iov_md": false 00:13:03.340 }, 00:13:03.340 "memory_domains": [ 00:13:03.340 { 00:13:03.340 "dma_device_id": "system", 00:13:03.340 "dma_device_type": 1 00:13:03.340 }, 00:13:03.340 { 00:13:03.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.340 "dma_device_type": 2 00:13:03.340 }, 00:13:03.340 { 00:13:03.340 "dma_device_id": "system", 00:13:03.340 "dma_device_type": 1 00:13:03.340 }, 00:13:03.340 { 00:13:03.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.340 "dma_device_type": 2 00:13:03.340 }, 00:13:03.340 { 00:13:03.340 "dma_device_id": "system", 00:13:03.340 "dma_device_type": 1 00:13:03.340 }, 00:13:03.340 { 00:13:03.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.340 "dma_device_type": 2 00:13:03.340 } 00:13:03.340 ], 00:13:03.340 "driver_specific": { 00:13:03.340 "raid": { 00:13:03.340 "uuid": "168860fd-2d15-44ee-ab36-1272350cb2c6", 00:13:03.340 "strip_size_kb": 0, 00:13:03.340 "state": "online", 00:13:03.340 "raid_level": "raid1", 00:13:03.340 "superblock": false, 00:13:03.340 "num_base_bdevs": 3, 00:13:03.340 "num_base_bdevs_discovered": 3, 00:13:03.340 "num_base_bdevs_operational": 3, 00:13:03.340 "base_bdevs_list": [ 00:13:03.340 { 00:13:03.340 "name": "NewBaseBdev", 00:13:03.340 "uuid": "7366c402-7a4c-48bd-97a4-b311a273ac7f", 00:13:03.340 "is_configured": true, 00:13:03.340 "data_offset": 0, 00:13:03.340 "data_size": 65536 00:13:03.340 }, 00:13:03.340 { 00:13:03.340 "name": "BaseBdev2", 00:13:03.340 "uuid": "52801767-cf59-4c9c-aa15-3a077aa342e7", 00:13:03.340 "is_configured": true, 00:13:03.340 "data_offset": 0, 00:13:03.340 "data_size": 65536 00:13:03.340 }, 00:13:03.340 { 00:13:03.340 "name": "BaseBdev3", 00:13:03.340 "uuid": "fb0c5684-2679-4084-b251-b90caa83c26f", 00:13:03.340 "is_configured": true, 
00:13:03.340 "data_offset": 0, 00:13:03.340 "data_size": 65536 00:13:03.340 } 00:13:03.340 ] 00:13:03.340 } 00:13:03.340 } 00:13:03.340 }' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:03.340 BaseBdev2 00:13:03.340 BaseBdev3' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.340 07:09:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.340 [2024-11-20 07:09:45.587802] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:13:03.340 [2024-11-20 07:09:45.587840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.340 [2024-11-20 07:09:45.587927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.340 [2024-11-20 07:09:45.588248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.340 [2024-11-20 07:09:45.588265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67706 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67706 ']' 00:13:03.340 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67706 00:13:03.341 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:03.341 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.341 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67706 00:13:03.599 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.599 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.599 killing process with pid 67706 00:13:03.599 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67706' 00:13:03.599 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67706 00:13:03.599 [2024-11-20 07:09:45.628287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:13:03.599 07:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67706 00:13:03.858 [2024-11-20 07:09:45.955032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:05.266 00:13:05.266 real 0m10.912s 00:13:05.266 user 0m17.338s 00:13:05.266 sys 0m1.885s 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.266 ************************************ 00:13:05.266 END TEST raid_state_function_test 00:13:05.266 ************************************ 00:13:05.266 07:09:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:05.266 07:09:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:05.266 07:09:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.266 07:09:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.266 ************************************ 00:13:05.266 START TEST raid_state_function_test_sb 00:13:05.266 ************************************ 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:05.266 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68333 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:05.267 Process raid pid: 68333 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68333' 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68333 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68333 ']' 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.267 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.267 [2024-11-20 07:09:47.287304] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:05.267 [2024-11-20 07:09:47.287454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.267 [2024-11-20 07:09:47.466164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.525 [2024-11-20 07:09:47.599766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.784 [2024-11-20 07:09:47.817314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.784 [2024-11-20 07:09:47.817375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.043 [2024-11-20 07:09:48.154872] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.043 [2024-11-20 07:09:48.154925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.043 [2024-11-20 07:09:48.154938] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.043 [2024-11-20 07:09:48.154947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.043 [2024-11-20 07:09:48.154954] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:06.043 [2024-11-20 07:09:48.154963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.043 "name": "Existed_Raid", 00:13:06.043 "uuid": "08e22228-91d1-4b26-aeed-1d8be9687c75", 00:13:06.043 "strip_size_kb": 0, 00:13:06.043 "state": "configuring", 00:13:06.043 "raid_level": "raid1", 00:13:06.043 "superblock": true, 00:13:06.043 "num_base_bdevs": 3, 00:13:06.043 "num_base_bdevs_discovered": 0, 00:13:06.043 "num_base_bdevs_operational": 3, 00:13:06.043 "base_bdevs_list": [ 00:13:06.043 { 00:13:06.043 "name": "BaseBdev1", 00:13:06.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.043 "is_configured": false, 00:13:06.043 "data_offset": 0, 00:13:06.043 "data_size": 0 00:13:06.043 }, 00:13:06.043 { 00:13:06.043 "name": "BaseBdev2", 00:13:06.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.043 "is_configured": false, 00:13:06.043 "data_offset": 0, 00:13:06.043 "data_size": 0 00:13:06.043 }, 00:13:06.043 { 00:13:06.043 "name": "BaseBdev3", 00:13:06.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.043 "is_configured": false, 00:13:06.043 "data_offset": 0, 00:13:06.043 "data_size": 0 00:13:06.043 } 00:13:06.043 ] 00:13:06.043 }' 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.043 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.303 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:06.303 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.303 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.303 [2024-11-20 07:09:48.566251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:06.303 [2024-11-20 07:09:48.566304] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.562 [2024-11-20 07:09:48.578227] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.562 [2024-11-20 07:09:48.578286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.562 [2024-11-20 07:09:48.578299] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.562 [2024-11-20 07:09:48.578316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.562 [2024-11-20 07:09:48.578324] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:06.562 [2024-11-20 07:09:48.578346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.562 [2024-11-20 07:09:48.630995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.562 BaseBdev1 
00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.562 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.562 [ 00:13:06.562 { 00:13:06.562 "name": "BaseBdev1", 00:13:06.562 "aliases": [ 00:13:06.562 "bea657f4-be97-4521-aa53-18c62ae69984" 00:13:06.562 ], 00:13:06.562 "product_name": "Malloc disk", 00:13:06.562 "block_size": 512, 00:13:06.562 "num_blocks": 65536, 00:13:06.562 "uuid": "bea657f4-be97-4521-aa53-18c62ae69984", 00:13:06.562 "assigned_rate_limits": { 00:13:06.562 
"rw_ios_per_sec": 0, 00:13:06.562 "rw_mbytes_per_sec": 0, 00:13:06.563 "r_mbytes_per_sec": 0, 00:13:06.563 "w_mbytes_per_sec": 0 00:13:06.563 }, 00:13:06.563 "claimed": true, 00:13:06.563 "claim_type": "exclusive_write", 00:13:06.563 "zoned": false, 00:13:06.563 "supported_io_types": { 00:13:06.563 "read": true, 00:13:06.563 "write": true, 00:13:06.563 "unmap": true, 00:13:06.563 "flush": true, 00:13:06.563 "reset": true, 00:13:06.563 "nvme_admin": false, 00:13:06.563 "nvme_io": false, 00:13:06.563 "nvme_io_md": false, 00:13:06.563 "write_zeroes": true, 00:13:06.563 "zcopy": true, 00:13:06.563 "get_zone_info": false, 00:13:06.563 "zone_management": false, 00:13:06.563 "zone_append": false, 00:13:06.563 "compare": false, 00:13:06.563 "compare_and_write": false, 00:13:06.563 "abort": true, 00:13:06.563 "seek_hole": false, 00:13:06.563 "seek_data": false, 00:13:06.563 "copy": true, 00:13:06.563 "nvme_iov_md": false 00:13:06.563 }, 00:13:06.563 "memory_domains": [ 00:13:06.563 { 00:13:06.563 "dma_device_id": "system", 00:13:06.563 "dma_device_type": 1 00:13:06.563 }, 00:13:06.563 { 00:13:06.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.563 "dma_device_type": 2 00:13:06.563 } 00:13:06.563 ], 00:13:06.563 "driver_specific": {} 00:13:06.563 } 00:13:06.563 ] 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.563 "name": "Existed_Raid", 00:13:06.563 "uuid": "25289ccb-d02e-4462-96b2-3e6a9e396da2", 00:13:06.563 "strip_size_kb": 0, 00:13:06.563 "state": "configuring", 00:13:06.563 "raid_level": "raid1", 00:13:06.563 "superblock": true, 00:13:06.563 "num_base_bdevs": 3, 00:13:06.563 "num_base_bdevs_discovered": 1, 00:13:06.563 "num_base_bdevs_operational": 3, 00:13:06.563 "base_bdevs_list": [ 00:13:06.563 { 00:13:06.563 "name": "BaseBdev1", 00:13:06.563 "uuid": "bea657f4-be97-4521-aa53-18c62ae69984", 00:13:06.563 "is_configured": true, 00:13:06.563 "data_offset": 2048, 00:13:06.563 "data_size": 63488 
00:13:06.563 }, 00:13:06.563 { 00:13:06.563 "name": "BaseBdev2", 00:13:06.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.563 "is_configured": false, 00:13:06.563 "data_offset": 0, 00:13:06.563 "data_size": 0 00:13:06.563 }, 00:13:06.563 { 00:13:06.563 "name": "BaseBdev3", 00:13:06.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.563 "is_configured": false, 00:13:06.563 "data_offset": 0, 00:13:06.563 "data_size": 0 00:13:06.563 } 00:13:06.563 ] 00:13:06.563 }' 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.563 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.130 [2024-11-20 07:09:49.098273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.130 [2024-11-20 07:09:49.098349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.130 [2024-11-20 07:09:49.106309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.130 [2024-11-20 07:09:49.108378] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.130 [2024-11-20 07:09:49.108422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.130 [2024-11-20 07:09:49.108435] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.130 [2024-11-20 07:09:49.108445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.130 "name": "Existed_Raid", 00:13:07.130 "uuid": "0a722f38-88f4-4cdb-a6b3-797bb05e81ed", 00:13:07.130 "strip_size_kb": 0, 00:13:07.130 "state": "configuring", 00:13:07.130 "raid_level": "raid1", 00:13:07.130 "superblock": true, 00:13:07.130 "num_base_bdevs": 3, 00:13:07.130 "num_base_bdevs_discovered": 1, 00:13:07.130 "num_base_bdevs_operational": 3, 00:13:07.130 "base_bdevs_list": [ 00:13:07.130 { 00:13:07.130 "name": "BaseBdev1", 00:13:07.130 "uuid": "bea657f4-be97-4521-aa53-18c62ae69984", 00:13:07.130 "is_configured": true, 00:13:07.130 "data_offset": 2048, 00:13:07.130 "data_size": 63488 00:13:07.130 }, 00:13:07.130 { 00:13:07.130 "name": "BaseBdev2", 00:13:07.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.130 "is_configured": false, 00:13:07.130 "data_offset": 0, 00:13:07.130 "data_size": 0 00:13:07.130 }, 00:13:07.130 { 00:13:07.130 "name": "BaseBdev3", 00:13:07.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.130 "is_configured": false, 00:13:07.130 "data_offset": 0, 00:13:07.130 "data_size": 0 00:13:07.130 } 00:13:07.130 ] 00:13:07.130 }' 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.130 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.390 [2024-11-20 07:09:49.509061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.390 BaseBdev2 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.390 [ 00:13:07.390 { 00:13:07.390 "name": "BaseBdev2", 00:13:07.390 "aliases": [ 00:13:07.390 "1f048877-cb32-49cf-83be-21813ebb9e5d" 00:13:07.390 ], 00:13:07.390 "product_name": "Malloc disk", 00:13:07.390 "block_size": 512, 00:13:07.390 "num_blocks": 65536, 00:13:07.390 "uuid": "1f048877-cb32-49cf-83be-21813ebb9e5d", 00:13:07.390 "assigned_rate_limits": { 00:13:07.390 "rw_ios_per_sec": 0, 00:13:07.390 "rw_mbytes_per_sec": 0, 00:13:07.390 "r_mbytes_per_sec": 0, 00:13:07.390 "w_mbytes_per_sec": 0 00:13:07.390 }, 00:13:07.390 "claimed": true, 00:13:07.390 "claim_type": "exclusive_write", 00:13:07.390 "zoned": false, 00:13:07.390 "supported_io_types": { 00:13:07.390 "read": true, 00:13:07.390 "write": true, 00:13:07.390 "unmap": true, 00:13:07.390 "flush": true, 00:13:07.390 "reset": true, 00:13:07.390 "nvme_admin": false, 00:13:07.390 "nvme_io": false, 00:13:07.390 "nvme_io_md": false, 00:13:07.390 "write_zeroes": true, 00:13:07.390 "zcopy": true, 00:13:07.390 "get_zone_info": false, 00:13:07.390 "zone_management": false, 00:13:07.390 "zone_append": false, 00:13:07.390 "compare": false, 00:13:07.390 "compare_and_write": false, 00:13:07.390 "abort": true, 00:13:07.390 "seek_hole": false, 00:13:07.390 "seek_data": false, 00:13:07.390 "copy": true, 00:13:07.390 "nvme_iov_md": false 00:13:07.390 }, 00:13:07.390 "memory_domains": [ 00:13:07.390 { 00:13:07.390 "dma_device_id": "system", 00:13:07.390 "dma_device_type": 1 00:13:07.390 }, 00:13:07.390 { 00:13:07.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.390 "dma_device_type": 2 00:13:07.390 } 00:13:07.390 ], 00:13:07.390 "driver_specific": {} 00:13:07.390 } 00:13:07.390 ] 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.390 
07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.390 "name": "Existed_Raid", 00:13:07.390 "uuid": "0a722f38-88f4-4cdb-a6b3-797bb05e81ed", 00:13:07.390 "strip_size_kb": 0, 00:13:07.390 "state": "configuring", 00:13:07.390 "raid_level": "raid1", 00:13:07.390 "superblock": true, 00:13:07.390 "num_base_bdevs": 3, 00:13:07.390 "num_base_bdevs_discovered": 2, 00:13:07.390 "num_base_bdevs_operational": 3, 00:13:07.390 "base_bdevs_list": [ 00:13:07.390 { 00:13:07.390 "name": "BaseBdev1", 00:13:07.390 "uuid": "bea657f4-be97-4521-aa53-18c62ae69984", 00:13:07.390 "is_configured": true, 00:13:07.390 "data_offset": 2048, 00:13:07.390 "data_size": 63488 00:13:07.390 }, 00:13:07.390 { 00:13:07.390 "name": "BaseBdev2", 00:13:07.390 "uuid": "1f048877-cb32-49cf-83be-21813ebb9e5d", 00:13:07.390 "is_configured": true, 00:13:07.390 "data_offset": 2048, 00:13:07.390 "data_size": 63488 00:13:07.390 }, 00:13:07.390 { 00:13:07.390 "name": "BaseBdev3", 00:13:07.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.390 "is_configured": false, 00:13:07.390 "data_offset": 0, 00:13:07.390 "data_size": 0 00:13:07.390 } 00:13:07.390 ] 00:13:07.390 }' 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.390 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.958 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:07.958 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.958 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.958 [2024-11-20 07:09:50.032934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.958 [2024-11-20 07:09:50.033251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:13:07.958 [2024-11-20 07:09:50.033278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.958 BaseBdev3 00:13:07.958 [2024-11-20 07:09:50.033733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:07.958 [2024-11-20 07:09:50.033901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:07.958 [2024-11-20 07:09:50.033914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:07.958 [2024-11-20 07:09:50.034071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.958 07:09:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.958 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.958 [ 00:13:07.958 { 00:13:07.958 "name": "BaseBdev3", 00:13:07.958 "aliases": [ 00:13:07.958 "3f6fa6a8-a8a1-4fb4-a1fe-bcf3472b57fd" 00:13:07.958 ], 00:13:07.958 "product_name": "Malloc disk", 00:13:07.958 "block_size": 512, 00:13:07.958 "num_blocks": 65536, 00:13:07.958 "uuid": "3f6fa6a8-a8a1-4fb4-a1fe-bcf3472b57fd", 00:13:07.958 "assigned_rate_limits": { 00:13:07.958 "rw_ios_per_sec": 0, 00:13:07.958 "rw_mbytes_per_sec": 0, 00:13:07.959 "r_mbytes_per_sec": 0, 00:13:07.959 "w_mbytes_per_sec": 0 00:13:07.959 }, 00:13:07.959 "claimed": true, 00:13:07.959 "claim_type": "exclusive_write", 00:13:07.959 "zoned": false, 00:13:07.959 "supported_io_types": { 00:13:07.959 "read": true, 00:13:07.959 "write": true, 00:13:07.959 "unmap": true, 00:13:07.959 "flush": true, 00:13:07.959 "reset": true, 00:13:07.959 "nvme_admin": false, 00:13:07.959 "nvme_io": false, 00:13:07.959 "nvme_io_md": false, 00:13:07.959 "write_zeroes": true, 00:13:07.959 "zcopy": true, 00:13:07.959 "get_zone_info": false, 00:13:07.959 "zone_management": false, 00:13:07.959 "zone_append": false, 00:13:07.959 "compare": false, 00:13:07.959 "compare_and_write": false, 00:13:07.959 "abort": true, 00:13:07.959 "seek_hole": false, 00:13:07.959 "seek_data": false, 00:13:07.959 "copy": true, 00:13:07.959 "nvme_iov_md": false 00:13:07.959 }, 00:13:07.959 "memory_domains": [ 00:13:07.959 { 00:13:07.959 "dma_device_id": "system", 00:13:07.959 "dma_device_type": 1 00:13:07.959 }, 00:13:07.959 { 00:13:07.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.959 "dma_device_type": 2 00:13:07.959 } 00:13:07.959 ], 00:13:07.959 "driver_specific": {} 00:13:07.959 } 00:13:07.959 ] 
00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.959 
07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.959 "name": "Existed_Raid", 00:13:07.959 "uuid": "0a722f38-88f4-4cdb-a6b3-797bb05e81ed", 00:13:07.959 "strip_size_kb": 0, 00:13:07.959 "state": "online", 00:13:07.959 "raid_level": "raid1", 00:13:07.959 "superblock": true, 00:13:07.959 "num_base_bdevs": 3, 00:13:07.959 "num_base_bdevs_discovered": 3, 00:13:07.959 "num_base_bdevs_operational": 3, 00:13:07.959 "base_bdevs_list": [ 00:13:07.959 { 00:13:07.959 "name": "BaseBdev1", 00:13:07.959 "uuid": "bea657f4-be97-4521-aa53-18c62ae69984", 00:13:07.959 "is_configured": true, 00:13:07.959 "data_offset": 2048, 00:13:07.959 "data_size": 63488 00:13:07.959 }, 00:13:07.959 { 00:13:07.959 "name": "BaseBdev2", 00:13:07.959 "uuid": "1f048877-cb32-49cf-83be-21813ebb9e5d", 00:13:07.959 "is_configured": true, 00:13:07.959 "data_offset": 2048, 00:13:07.959 "data_size": 63488 00:13:07.959 }, 00:13:07.959 { 00:13:07.959 "name": "BaseBdev3", 00:13:07.959 "uuid": "3f6fa6a8-a8a1-4fb4-a1fe-bcf3472b57fd", 00:13:07.959 "is_configured": true, 00:13:07.959 "data_offset": 2048, 00:13:07.959 "data_size": 63488 00:13:07.959 } 00:13:07.959 ] 00:13:07.959 }' 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.959 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.526 [2024-11-20 07:09:50.516594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.526 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.526 "name": "Existed_Raid", 00:13:08.526 "aliases": [ 00:13:08.526 "0a722f38-88f4-4cdb-a6b3-797bb05e81ed" 00:13:08.526 ], 00:13:08.526 "product_name": "Raid Volume", 00:13:08.526 "block_size": 512, 00:13:08.526 "num_blocks": 63488, 00:13:08.526 "uuid": "0a722f38-88f4-4cdb-a6b3-797bb05e81ed", 00:13:08.526 "assigned_rate_limits": { 00:13:08.526 "rw_ios_per_sec": 0, 00:13:08.526 "rw_mbytes_per_sec": 0, 00:13:08.526 "r_mbytes_per_sec": 0, 00:13:08.526 "w_mbytes_per_sec": 0 00:13:08.526 }, 00:13:08.526 "claimed": false, 00:13:08.526 "zoned": false, 00:13:08.526 "supported_io_types": { 00:13:08.526 "read": true, 00:13:08.526 "write": true, 00:13:08.526 "unmap": false, 00:13:08.526 "flush": false, 00:13:08.526 "reset": true, 00:13:08.526 "nvme_admin": false, 00:13:08.526 "nvme_io": false, 00:13:08.526 "nvme_io_md": false, 00:13:08.526 "write_zeroes": true, 
00:13:08.526 "zcopy": false, 00:13:08.526 "get_zone_info": false, 00:13:08.526 "zone_management": false, 00:13:08.526 "zone_append": false, 00:13:08.526 "compare": false, 00:13:08.526 "compare_and_write": false, 00:13:08.526 "abort": false, 00:13:08.526 "seek_hole": false, 00:13:08.526 "seek_data": false, 00:13:08.526 "copy": false, 00:13:08.526 "nvme_iov_md": false 00:13:08.526 }, 00:13:08.526 "memory_domains": [ 00:13:08.526 { 00:13:08.526 "dma_device_id": "system", 00:13:08.526 "dma_device_type": 1 00:13:08.526 }, 00:13:08.526 { 00:13:08.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.526 "dma_device_type": 2 00:13:08.526 }, 00:13:08.527 { 00:13:08.527 "dma_device_id": "system", 00:13:08.527 "dma_device_type": 1 00:13:08.527 }, 00:13:08.527 { 00:13:08.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.527 "dma_device_type": 2 00:13:08.527 }, 00:13:08.527 { 00:13:08.527 "dma_device_id": "system", 00:13:08.527 "dma_device_type": 1 00:13:08.527 }, 00:13:08.527 { 00:13:08.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.527 "dma_device_type": 2 00:13:08.527 } 00:13:08.527 ], 00:13:08.527 "driver_specific": { 00:13:08.527 "raid": { 00:13:08.527 "uuid": "0a722f38-88f4-4cdb-a6b3-797bb05e81ed", 00:13:08.527 "strip_size_kb": 0, 00:13:08.527 "state": "online", 00:13:08.527 "raid_level": "raid1", 00:13:08.527 "superblock": true, 00:13:08.527 "num_base_bdevs": 3, 00:13:08.527 "num_base_bdevs_discovered": 3, 00:13:08.527 "num_base_bdevs_operational": 3, 00:13:08.527 "base_bdevs_list": [ 00:13:08.527 { 00:13:08.527 "name": "BaseBdev1", 00:13:08.527 "uuid": "bea657f4-be97-4521-aa53-18c62ae69984", 00:13:08.527 "is_configured": true, 00:13:08.527 "data_offset": 2048, 00:13:08.527 "data_size": 63488 00:13:08.527 }, 00:13:08.527 { 00:13:08.527 "name": "BaseBdev2", 00:13:08.527 "uuid": "1f048877-cb32-49cf-83be-21813ebb9e5d", 00:13:08.527 "is_configured": true, 00:13:08.527 "data_offset": 2048, 00:13:08.527 "data_size": 63488 00:13:08.527 }, 00:13:08.527 { 
00:13:08.527 "name": "BaseBdev3", 00:13:08.527 "uuid": "3f6fa6a8-a8a1-4fb4-a1fe-bcf3472b57fd", 00:13:08.527 "is_configured": true, 00:13:08.527 "data_offset": 2048, 00:13:08.527 "data_size": 63488 00:13:08.527 } 00:13:08.527 ] 00:13:08.527 } 00:13:08.527 } 00:13:08.527 }' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:08.527 BaseBdev2 00:13:08.527 BaseBdev3' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.527 07:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.527 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.527 [2024-11-20 07:09:50.775820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.787 
07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.787 "name": "Existed_Raid", 00:13:08.787 "uuid": "0a722f38-88f4-4cdb-a6b3-797bb05e81ed", 00:13:08.787 "strip_size_kb": 0, 00:13:08.787 "state": "online", 00:13:08.787 "raid_level": "raid1", 00:13:08.787 "superblock": true, 00:13:08.787 "num_base_bdevs": 3, 00:13:08.787 "num_base_bdevs_discovered": 2, 00:13:08.787 "num_base_bdevs_operational": 2, 00:13:08.787 "base_bdevs_list": [ 00:13:08.787 { 00:13:08.787 "name": null, 00:13:08.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.787 "is_configured": false, 00:13:08.787 "data_offset": 0, 00:13:08.787 "data_size": 63488 00:13:08.787 }, 00:13:08.787 { 00:13:08.787 "name": "BaseBdev2", 00:13:08.787 "uuid": "1f048877-cb32-49cf-83be-21813ebb9e5d", 00:13:08.787 "is_configured": true, 00:13:08.787 "data_offset": 2048, 00:13:08.787 "data_size": 63488 00:13:08.787 }, 00:13:08.787 { 00:13:08.787 "name": "BaseBdev3", 00:13:08.787 "uuid": "3f6fa6a8-a8a1-4fb4-a1fe-bcf3472b57fd", 00:13:08.787 "is_configured": true, 00:13:08.787 "data_offset": 2048, 00:13:08.787 "data_size": 63488 00:13:08.787 } 00:13:08.787 ] 00:13:08.787 }' 00:13:08.787 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.787 
07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.354 [2024-11-20 07:09:51.402583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.354 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.354 [2024-11-20 07:09:51.562401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:09.354 [2024-11-20 07:09:51.562511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.614 [2024-11-20 07:09:51.661841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.614 [2024-11-20 07:09:51.661913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.614 [2024-11-20 07:09:51.661927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.614 BaseBdev2 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:09.614 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.615 [ 00:13:09.615 { 00:13:09.615 "name": "BaseBdev2", 00:13:09.615 "aliases": [ 00:13:09.615 "cfdc75d4-67c7-4f05-b052-6be12ab517a0" 00:13:09.615 ], 00:13:09.615 "product_name": "Malloc disk", 00:13:09.615 "block_size": 512, 00:13:09.615 "num_blocks": 65536, 00:13:09.615 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:09.615 "assigned_rate_limits": { 00:13:09.615 "rw_ios_per_sec": 0, 00:13:09.615 "rw_mbytes_per_sec": 0, 00:13:09.615 "r_mbytes_per_sec": 0, 00:13:09.615 "w_mbytes_per_sec": 0 00:13:09.615 }, 00:13:09.615 "claimed": false, 00:13:09.615 "zoned": false, 00:13:09.615 "supported_io_types": { 00:13:09.615 "read": true, 00:13:09.615 "write": true, 00:13:09.615 "unmap": true, 00:13:09.615 "flush": true, 00:13:09.615 "reset": true, 00:13:09.615 "nvme_admin": false, 00:13:09.615 "nvme_io": false, 00:13:09.615 
"nvme_io_md": false, 00:13:09.615 "write_zeroes": true, 00:13:09.615 "zcopy": true, 00:13:09.615 "get_zone_info": false, 00:13:09.615 "zone_management": false, 00:13:09.615 "zone_append": false, 00:13:09.615 "compare": false, 00:13:09.615 "compare_and_write": false, 00:13:09.615 "abort": true, 00:13:09.615 "seek_hole": false, 00:13:09.615 "seek_data": false, 00:13:09.615 "copy": true, 00:13:09.615 "nvme_iov_md": false 00:13:09.615 }, 00:13:09.615 "memory_domains": [ 00:13:09.615 { 00:13:09.615 "dma_device_id": "system", 00:13:09.615 "dma_device_type": 1 00:13:09.615 }, 00:13:09.615 { 00:13:09.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.615 "dma_device_type": 2 00:13:09.615 } 00:13:09.615 ], 00:13:09.615 "driver_specific": {} 00:13:09.615 } 00:13:09.615 ] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.615 BaseBdev3 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.615 [ 00:13:09.615 { 00:13:09.615 "name": "BaseBdev3", 00:13:09.615 "aliases": [ 00:13:09.615 "78dea956-a759-4124-801f-9d451a731317" 00:13:09.615 ], 00:13:09.615 "product_name": "Malloc disk", 00:13:09.615 "block_size": 512, 00:13:09.615 "num_blocks": 65536, 00:13:09.615 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:09.615 "assigned_rate_limits": { 00:13:09.615 "rw_ios_per_sec": 0, 00:13:09.615 "rw_mbytes_per_sec": 0, 00:13:09.615 "r_mbytes_per_sec": 0, 00:13:09.615 "w_mbytes_per_sec": 0 00:13:09.615 }, 00:13:09.615 "claimed": false, 00:13:09.615 "zoned": false, 00:13:09.615 "supported_io_types": { 00:13:09.615 "read": true, 00:13:09.615 "write": true, 00:13:09.615 "unmap": true, 00:13:09.615 "flush": true, 00:13:09.615 "reset": true, 00:13:09.615 "nvme_admin": false, 
00:13:09.615 "nvme_io": false, 00:13:09.615 "nvme_io_md": false, 00:13:09.615 "write_zeroes": true, 00:13:09.615 "zcopy": true, 00:13:09.615 "get_zone_info": false, 00:13:09.615 "zone_management": false, 00:13:09.615 "zone_append": false, 00:13:09.615 "compare": false, 00:13:09.615 "compare_and_write": false, 00:13:09.615 "abort": true, 00:13:09.615 "seek_hole": false, 00:13:09.615 "seek_data": false, 00:13:09.615 "copy": true, 00:13:09.615 "nvme_iov_md": false 00:13:09.615 }, 00:13:09.615 "memory_domains": [ 00:13:09.615 { 00:13:09.615 "dma_device_id": "system", 00:13:09.615 "dma_device_type": 1 00:13:09.615 }, 00:13:09.615 { 00:13:09.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.615 "dma_device_type": 2 00:13:09.615 } 00:13:09.615 ], 00:13:09.615 "driver_specific": {} 00:13:09.615 } 00:13:09.615 ] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.615 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.615 [2024-11-20 07:09:51.876378] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.615 [2024-11-20 07:09:51.876426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.615 [2024-11-20 07:09:51.876467] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.874 [2024-11-20 07:09:51.878700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.874 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.874 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:09.874 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.875 
07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.875 "name": "Existed_Raid", 00:13:09.875 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:09.875 "strip_size_kb": 0, 00:13:09.875 "state": "configuring", 00:13:09.875 "raid_level": "raid1", 00:13:09.875 "superblock": true, 00:13:09.875 "num_base_bdevs": 3, 00:13:09.875 "num_base_bdevs_discovered": 2, 00:13:09.875 "num_base_bdevs_operational": 3, 00:13:09.875 "base_bdevs_list": [ 00:13:09.875 { 00:13:09.875 "name": "BaseBdev1", 00:13:09.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.875 "is_configured": false, 00:13:09.875 "data_offset": 0, 00:13:09.875 "data_size": 0 00:13:09.875 }, 00:13:09.875 { 00:13:09.875 "name": "BaseBdev2", 00:13:09.875 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:09.875 "is_configured": true, 00:13:09.875 "data_offset": 2048, 00:13:09.875 "data_size": 63488 00:13:09.875 }, 00:13:09.875 { 00:13:09.875 "name": "BaseBdev3", 00:13:09.875 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:09.875 "is_configured": true, 00:13:09.875 "data_offset": 2048, 00:13:09.875 "data_size": 63488 00:13:09.875 } 00:13:09.875 ] 00:13:09.875 }' 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.875 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.134 [2024-11-20 07:09:52.343584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:10.134 07:09:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.134 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.393 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.393 "name": 
"Existed_Raid", 00:13:10.393 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:10.393 "strip_size_kb": 0, 00:13:10.393 "state": "configuring", 00:13:10.393 "raid_level": "raid1", 00:13:10.393 "superblock": true, 00:13:10.393 "num_base_bdevs": 3, 00:13:10.393 "num_base_bdevs_discovered": 1, 00:13:10.393 "num_base_bdevs_operational": 3, 00:13:10.393 "base_bdevs_list": [ 00:13:10.393 { 00:13:10.393 "name": "BaseBdev1", 00:13:10.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.393 "is_configured": false, 00:13:10.393 "data_offset": 0, 00:13:10.393 "data_size": 0 00:13:10.393 }, 00:13:10.393 { 00:13:10.393 "name": null, 00:13:10.393 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:10.393 "is_configured": false, 00:13:10.393 "data_offset": 0, 00:13:10.393 "data_size": 63488 00:13:10.393 }, 00:13:10.393 { 00:13:10.393 "name": "BaseBdev3", 00:13:10.393 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:10.393 "is_configured": true, 00:13:10.393 "data_offset": 2048, 00:13:10.393 "data_size": 63488 00:13:10.393 } 00:13:10.393 ] 00:13:10.393 }' 00:13:10.393 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.393 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:10.652 
07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.652 [2024-11-20 07:09:52.840264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.652 BaseBdev1 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:10.652 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.652 [ 00:13:10.652 { 00:13:10.652 "name": "BaseBdev1", 00:13:10.652 "aliases": [ 00:13:10.652 "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff" 00:13:10.652 ], 00:13:10.652 "product_name": "Malloc disk", 00:13:10.652 "block_size": 512, 00:13:10.652 "num_blocks": 65536, 00:13:10.652 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:10.652 "assigned_rate_limits": { 00:13:10.652 "rw_ios_per_sec": 0, 00:13:10.652 "rw_mbytes_per_sec": 0, 00:13:10.652 "r_mbytes_per_sec": 0, 00:13:10.652 "w_mbytes_per_sec": 0 00:13:10.652 }, 00:13:10.652 "claimed": true, 00:13:10.652 "claim_type": "exclusive_write", 00:13:10.652 "zoned": false, 00:13:10.652 "supported_io_types": { 00:13:10.652 "read": true, 00:13:10.652 "write": true, 00:13:10.652 "unmap": true, 00:13:10.652 "flush": true, 00:13:10.652 "reset": true, 00:13:10.652 "nvme_admin": false, 00:13:10.652 "nvme_io": false, 00:13:10.652 "nvme_io_md": false, 00:13:10.652 "write_zeroes": true, 00:13:10.652 "zcopy": true, 00:13:10.652 "get_zone_info": false, 00:13:10.652 "zone_management": false, 00:13:10.652 "zone_append": false, 00:13:10.652 "compare": false, 00:13:10.652 "compare_and_write": false, 00:13:10.652 "abort": true, 00:13:10.653 "seek_hole": false, 00:13:10.653 "seek_data": false, 00:13:10.653 "copy": true, 00:13:10.653 "nvme_iov_md": false 00:13:10.653 }, 00:13:10.653 "memory_domains": [ 00:13:10.653 { 00:13:10.653 "dma_device_id": "system", 00:13:10.653 "dma_device_type": 1 00:13:10.653 }, 00:13:10.653 { 00:13:10.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.653 "dma_device_type": 2 00:13:10.653 } 00:13:10.653 ], 00:13:10.653 "driver_specific": {} 00:13:10.653 } 00:13:10.653 ] 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:10.653 
07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.653 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.912 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.912 "name": "Existed_Raid", 00:13:10.912 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:10.912 "strip_size_kb": 0, 
00:13:10.912 "state": "configuring", 00:13:10.912 "raid_level": "raid1", 00:13:10.912 "superblock": true, 00:13:10.912 "num_base_bdevs": 3, 00:13:10.912 "num_base_bdevs_discovered": 2, 00:13:10.912 "num_base_bdevs_operational": 3, 00:13:10.912 "base_bdevs_list": [ 00:13:10.912 { 00:13:10.912 "name": "BaseBdev1", 00:13:10.912 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:10.912 "is_configured": true, 00:13:10.912 "data_offset": 2048, 00:13:10.912 "data_size": 63488 00:13:10.912 }, 00:13:10.912 { 00:13:10.912 "name": null, 00:13:10.912 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:10.912 "is_configured": false, 00:13:10.912 "data_offset": 0, 00:13:10.912 "data_size": 63488 00:13:10.912 }, 00:13:10.912 { 00:13:10.912 "name": "BaseBdev3", 00:13:10.912 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:10.912 "is_configured": true, 00:13:10.912 "data_offset": 2048, 00:13:10.912 "data_size": 63488 00:13:10.912 } 00:13:10.912 ] 00:13:10.912 }' 00:13:10.912 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.912 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.172 [2024-11-20 07:09:53.367443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.172 "name": "Existed_Raid", 00:13:11.172 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:11.172 "strip_size_kb": 0, 00:13:11.172 "state": "configuring", 00:13:11.172 "raid_level": "raid1", 00:13:11.172 "superblock": true, 00:13:11.172 "num_base_bdevs": 3, 00:13:11.172 "num_base_bdevs_discovered": 1, 00:13:11.172 "num_base_bdevs_operational": 3, 00:13:11.172 "base_bdevs_list": [ 00:13:11.172 { 00:13:11.172 "name": "BaseBdev1", 00:13:11.172 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:11.172 "is_configured": true, 00:13:11.172 "data_offset": 2048, 00:13:11.172 "data_size": 63488 00:13:11.172 }, 00:13:11.172 { 00:13:11.172 "name": null, 00:13:11.172 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:11.172 "is_configured": false, 00:13:11.172 "data_offset": 0, 00:13:11.172 "data_size": 63488 00:13:11.172 }, 00:13:11.172 { 00:13:11.172 "name": null, 00:13:11.172 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:11.172 "is_configured": false, 00:13:11.172 "data_offset": 0, 00:13:11.172 "data_size": 63488 00:13:11.172 } 00:13:11.172 ] 00:13:11.172 }' 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.172 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.740 
07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.740 [2024-11-20 07:09:53.862622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.740 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.740 "name": "Existed_Raid", 00:13:11.740 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:11.740 "strip_size_kb": 0, 00:13:11.740 "state": "configuring", 00:13:11.740 "raid_level": "raid1", 00:13:11.740 "superblock": true, 00:13:11.740 "num_base_bdevs": 3, 00:13:11.740 "num_base_bdevs_discovered": 2, 00:13:11.740 "num_base_bdevs_operational": 3, 00:13:11.740 "base_bdevs_list": [ 00:13:11.740 { 00:13:11.740 "name": "BaseBdev1", 00:13:11.740 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:11.740 "is_configured": true, 00:13:11.740 "data_offset": 2048, 00:13:11.740 "data_size": 63488 00:13:11.740 }, 00:13:11.740 { 00:13:11.740 "name": null, 00:13:11.740 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:11.740 "is_configured": false, 00:13:11.741 "data_offset": 0, 00:13:11.741 "data_size": 63488 00:13:11.741 }, 00:13:11.741 { 00:13:11.741 "name": "BaseBdev3", 00:13:11.741 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:11.741 "is_configured": true, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 } 00:13:11.741 ] 00:13:11.741 }' 00:13:11.741 07:09:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.741 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.309 [2024-11-20 07:09:54.377789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.309 "name": "Existed_Raid", 00:13:12.309 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:12.309 "strip_size_kb": 0, 00:13:12.309 "state": "configuring", 00:13:12.309 "raid_level": "raid1", 00:13:12.309 "superblock": true, 00:13:12.309 "num_base_bdevs": 3, 00:13:12.309 "num_base_bdevs_discovered": 1, 00:13:12.309 "num_base_bdevs_operational": 3, 00:13:12.309 "base_bdevs_list": [ 00:13:12.309 { 00:13:12.309 "name": null, 00:13:12.309 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:12.309 "is_configured": false, 00:13:12.309 "data_offset": 0, 00:13:12.309 "data_size": 63488 00:13:12.309 }, 00:13:12.309 { 00:13:12.309 "name": null, 00:13:12.309 "uuid": 
"cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:12.309 "is_configured": false, 00:13:12.309 "data_offset": 0, 00:13:12.309 "data_size": 63488 00:13:12.309 }, 00:13:12.309 { 00:13:12.309 "name": "BaseBdev3", 00:13:12.309 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:12.309 "is_configured": true, 00:13:12.309 "data_offset": 2048, 00:13:12.309 "data_size": 63488 00:13:12.309 } 00:13:12.309 ] 00:13:12.309 }' 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.309 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.877 [2024-11-20 07:09:54.951161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.877 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.877 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.877 "name": "Existed_Raid", 00:13:12.878 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:12.878 "strip_size_kb": 0, 00:13:12.878 "state": "configuring", 00:13:12.878 
"raid_level": "raid1", 00:13:12.878 "superblock": true, 00:13:12.878 "num_base_bdevs": 3, 00:13:12.878 "num_base_bdevs_discovered": 2, 00:13:12.878 "num_base_bdevs_operational": 3, 00:13:12.878 "base_bdevs_list": [ 00:13:12.878 { 00:13:12.878 "name": null, 00:13:12.878 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:12.878 "is_configured": false, 00:13:12.878 "data_offset": 0, 00:13:12.878 "data_size": 63488 00:13:12.878 }, 00:13:12.878 { 00:13:12.878 "name": "BaseBdev2", 00:13:12.878 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:12.878 "is_configured": true, 00:13:12.878 "data_offset": 2048, 00:13:12.878 "data_size": 63488 00:13:12.878 }, 00:13:12.878 { 00:13:12.878 "name": "BaseBdev3", 00:13:12.878 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:12.878 "is_configured": true, 00:13:12.878 "data_offset": 2048, 00:13:12.878 "data_size": 63488 00:13:12.878 } 00:13:12.878 ] 00:13:12.878 }' 00:13:12.878 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.878 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.137 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.137 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:13.137 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.137 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.137 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:13.396 07:09:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.396 [2024-11-20 07:09:55.479592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:13.396 [2024-11-20 07:09:55.479828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:13.396 [2024-11-20 07:09:55.479840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:13.396 [2024-11-20 07:09:55.480110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:13.396 [2024-11-20 07:09:55.480277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:13.396 [2024-11-20 07:09:55.480294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:13.396 [2024-11-20 07:09:55.480436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.396 NewBaseBdev 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:13.396 
07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.396 [ 00:13:13.396 { 00:13:13.396 "name": "NewBaseBdev", 00:13:13.396 "aliases": [ 00:13:13.396 "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff" 00:13:13.396 ], 00:13:13.396 "product_name": "Malloc disk", 00:13:13.396 "block_size": 512, 00:13:13.396 "num_blocks": 65536, 00:13:13.396 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:13.396 "assigned_rate_limits": { 00:13:13.396 "rw_ios_per_sec": 0, 00:13:13.396 "rw_mbytes_per_sec": 0, 00:13:13.396 "r_mbytes_per_sec": 0, 00:13:13.396 "w_mbytes_per_sec": 0 00:13:13.396 }, 00:13:13.396 "claimed": true, 00:13:13.396 "claim_type": "exclusive_write", 00:13:13.396 
"zoned": false, 00:13:13.396 "supported_io_types": { 00:13:13.396 "read": true, 00:13:13.396 "write": true, 00:13:13.396 "unmap": true, 00:13:13.396 "flush": true, 00:13:13.396 "reset": true, 00:13:13.396 "nvme_admin": false, 00:13:13.396 "nvme_io": false, 00:13:13.396 "nvme_io_md": false, 00:13:13.396 "write_zeroes": true, 00:13:13.396 "zcopy": true, 00:13:13.396 "get_zone_info": false, 00:13:13.396 "zone_management": false, 00:13:13.396 "zone_append": false, 00:13:13.396 "compare": false, 00:13:13.396 "compare_and_write": false, 00:13:13.396 "abort": true, 00:13:13.396 "seek_hole": false, 00:13:13.396 "seek_data": false, 00:13:13.396 "copy": true, 00:13:13.396 "nvme_iov_md": false 00:13:13.396 }, 00:13:13.396 "memory_domains": [ 00:13:13.396 { 00:13:13.396 "dma_device_id": "system", 00:13:13.396 "dma_device_type": 1 00:13:13.396 }, 00:13:13.396 { 00:13:13.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.396 "dma_device_type": 2 00:13:13.396 } 00:13:13.396 ], 00:13:13.396 "driver_specific": {} 00:13:13.396 } 00:13:13.396 ] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.396 "name": "Existed_Raid", 00:13:13.396 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:13.396 "strip_size_kb": 0, 00:13:13.396 "state": "online", 00:13:13.396 "raid_level": "raid1", 00:13:13.396 "superblock": true, 00:13:13.396 "num_base_bdevs": 3, 00:13:13.396 "num_base_bdevs_discovered": 3, 00:13:13.396 "num_base_bdevs_operational": 3, 00:13:13.396 "base_bdevs_list": [ 00:13:13.396 { 00:13:13.396 "name": "NewBaseBdev", 00:13:13.396 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:13.396 "is_configured": true, 00:13:13.396 "data_offset": 2048, 00:13:13.396 "data_size": 63488 00:13:13.396 }, 00:13:13.396 { 00:13:13.396 "name": "BaseBdev2", 00:13:13.396 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:13.396 "is_configured": true, 00:13:13.396 "data_offset": 2048, 00:13:13.396 "data_size": 63488 00:13:13.396 }, 00:13:13.396 
{ 00:13:13.396 "name": "BaseBdev3", 00:13:13.396 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:13.396 "is_configured": true, 00:13:13.396 "data_offset": 2048, 00:13:13.396 "data_size": 63488 00:13:13.396 } 00:13:13.396 ] 00:13:13.396 }' 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.396 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.965 [2024-11-20 07:09:55.975111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.965 07:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.965 "name": "Existed_Raid", 00:13:13.965 
"aliases": [ 00:13:13.965 "ebedd4d3-3c35-4385-8194-692d9227049a" 00:13:13.965 ], 00:13:13.965 "product_name": "Raid Volume", 00:13:13.965 "block_size": 512, 00:13:13.965 "num_blocks": 63488, 00:13:13.965 "uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:13.965 "assigned_rate_limits": { 00:13:13.965 "rw_ios_per_sec": 0, 00:13:13.965 "rw_mbytes_per_sec": 0, 00:13:13.965 "r_mbytes_per_sec": 0, 00:13:13.965 "w_mbytes_per_sec": 0 00:13:13.965 }, 00:13:13.965 "claimed": false, 00:13:13.965 "zoned": false, 00:13:13.965 "supported_io_types": { 00:13:13.965 "read": true, 00:13:13.965 "write": true, 00:13:13.965 "unmap": false, 00:13:13.965 "flush": false, 00:13:13.965 "reset": true, 00:13:13.965 "nvme_admin": false, 00:13:13.965 "nvme_io": false, 00:13:13.965 "nvme_io_md": false, 00:13:13.965 "write_zeroes": true, 00:13:13.965 "zcopy": false, 00:13:13.965 "get_zone_info": false, 00:13:13.965 "zone_management": false, 00:13:13.965 "zone_append": false, 00:13:13.965 "compare": false, 00:13:13.965 "compare_and_write": false, 00:13:13.965 "abort": false, 00:13:13.965 "seek_hole": false, 00:13:13.965 "seek_data": false, 00:13:13.965 "copy": false, 00:13:13.965 "nvme_iov_md": false 00:13:13.965 }, 00:13:13.965 "memory_domains": [ 00:13:13.965 { 00:13:13.965 "dma_device_id": "system", 00:13:13.965 "dma_device_type": 1 00:13:13.965 }, 00:13:13.965 { 00:13:13.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.965 "dma_device_type": 2 00:13:13.965 }, 00:13:13.965 { 00:13:13.965 "dma_device_id": "system", 00:13:13.965 "dma_device_type": 1 00:13:13.965 }, 00:13:13.965 { 00:13:13.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.965 "dma_device_type": 2 00:13:13.965 }, 00:13:13.965 { 00:13:13.965 "dma_device_id": "system", 00:13:13.965 "dma_device_type": 1 00:13:13.965 }, 00:13:13.965 { 00:13:13.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.965 "dma_device_type": 2 00:13:13.965 } 00:13:13.965 ], 00:13:13.965 "driver_specific": { 00:13:13.965 "raid": { 00:13:13.965 
"uuid": "ebedd4d3-3c35-4385-8194-692d9227049a", 00:13:13.965 "strip_size_kb": 0, 00:13:13.965 "state": "online", 00:13:13.965 "raid_level": "raid1", 00:13:13.965 "superblock": true, 00:13:13.965 "num_base_bdevs": 3, 00:13:13.965 "num_base_bdevs_discovered": 3, 00:13:13.965 "num_base_bdevs_operational": 3, 00:13:13.965 "base_bdevs_list": [ 00:13:13.965 { 00:13:13.965 "name": "NewBaseBdev", 00:13:13.965 "uuid": "5ab7fd7e-5da9-409a-b3e7-3ea1726cb4ff", 00:13:13.965 "is_configured": true, 00:13:13.965 "data_offset": 2048, 00:13:13.965 "data_size": 63488 00:13:13.965 }, 00:13:13.965 { 00:13:13.965 "name": "BaseBdev2", 00:13:13.965 "uuid": "cfdc75d4-67c7-4f05-b052-6be12ab517a0", 00:13:13.965 "is_configured": true, 00:13:13.965 "data_offset": 2048, 00:13:13.965 "data_size": 63488 00:13:13.965 }, 00:13:13.965 { 00:13:13.965 "name": "BaseBdev3", 00:13:13.965 "uuid": "78dea956-a759-4124-801f-9d451a731317", 00:13:13.965 "is_configured": true, 00:13:13.965 "data_offset": 2048, 00:13:13.965 "data_size": 63488 00:13:13.965 } 00:13:13.965 ] 00:13:13.965 } 00:13:13.965 } 00:13:13.965 }' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:13.965 BaseBdev2 00:13:13.965 BaseBdev3' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:13.965 07:09:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.965 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:13.966 07:09:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.966 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.966 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.966 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.224 [2024-11-20 07:09:56.266350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.224 [2024-11-20 07:09:56.266398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.224 [2024-11-20 07:09:56.266495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.224 [2024-11-20 07:09:56.266826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.224 [2024-11-20 07:09:56.266846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68333 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68333 ']' 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68333 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68333 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.224 killing process with pid 68333 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68333' 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68333 00:13:14.224 [2024-11-20 07:09:56.308248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.224 07:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68333 00:13:14.482 [2024-11-20 07:09:56.634821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.860 07:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:15.860 00:13:15.860 real 0m10.613s 00:13:15.860 user 0m16.835s 00:13:15.860 sys 0m1.839s 00:13:15.860 07:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.860 07:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.860 ************************************ 00:13:15.860 END TEST raid_state_function_test_sb 00:13:15.860 ************************************ 00:13:15.860 07:09:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:13:15.860 07:09:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:15.860 07:09:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.860 07:09:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.860 ************************************ 00:13:15.860 START TEST raid_superblock_test 00:13:15.860 ************************************ 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68953 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68953 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68953 ']' 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.860 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.860 [2024-11-20 07:09:57.961472] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:15.860 [2024-11-20 07:09:57.961597] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68953 ] 00:13:15.860 [2024-11-20 07:09:58.116410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.119 [2024-11-20 07:09:58.236604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.379 [2024-11-20 07:09:58.454163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.379 [2024-11-20 07:09:58.454199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.637 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:16.638 
07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.638 malloc1 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.638 [2024-11-20 07:09:58.873432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:16.638 [2024-11-20 07:09:58.873514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.638 [2024-11-20 07:09:58.873538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:16.638 [2024-11-20 07:09:58.873547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.638 [2024-11-20 07:09:58.875737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.638 [2024-11-20 07:09:58.875782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:16.638 pt1 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.638 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.897 malloc2 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.897 [2024-11-20 07:09:58.929967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:16.897 [2024-11-20 07:09:58.930047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.897 [2024-11-20 07:09:58.930073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:16.897 [2024-11-20 07:09:58.930083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.897 [2024-11-20 07:09:58.932349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.897 [2024-11-20 07:09:58.932402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:16.897 
pt2 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.897 malloc3 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.897 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:16.898 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.898 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.898 [2024-11-20 07:09:59.002582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:16.898 [2024-11-20 07:09:59.002655] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.898 [2024-11-20 07:09:59.002679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:16.898 [2024-11-20 07:09:59.002689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.898 [2024-11-20 07:09:59.004864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.898 [2024-11-20 07:09:59.004907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:16.898 pt3 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.898 [2024-11-20 07:09:59.014643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:16.898 [2024-11-20 07:09:59.016599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:16.898 [2024-11-20 07:09:59.016677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:16.898 [2024-11-20 07:09:59.016851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:16.898 [2024-11-20 07:09:59.016878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.898 [2024-11-20 07:09:59.017199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:16.898 
[2024-11-20 07:09:59.017439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:16.898 [2024-11-20 07:09:59.017460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:16.898 [2024-11-20 07:09:59.017647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.898 "name": "raid_bdev1", 00:13:16.898 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:16.898 "strip_size_kb": 0, 00:13:16.898 "state": "online", 00:13:16.898 "raid_level": "raid1", 00:13:16.898 "superblock": true, 00:13:16.898 "num_base_bdevs": 3, 00:13:16.898 "num_base_bdevs_discovered": 3, 00:13:16.898 "num_base_bdevs_operational": 3, 00:13:16.898 "base_bdevs_list": [ 00:13:16.898 { 00:13:16.898 "name": "pt1", 00:13:16.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:16.898 "is_configured": true, 00:13:16.898 "data_offset": 2048, 00:13:16.898 "data_size": 63488 00:13:16.898 }, 00:13:16.898 { 00:13:16.898 "name": "pt2", 00:13:16.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.898 "is_configured": true, 00:13:16.898 "data_offset": 2048, 00:13:16.898 "data_size": 63488 00:13:16.898 }, 00:13:16.898 { 00:13:16.898 "name": "pt3", 00:13:16.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.898 "is_configured": true, 00:13:16.898 "data_offset": 2048, 00:13:16.898 "data_size": 63488 00:13:16.898 } 00:13:16.898 ] 00:13:16.898 }' 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.898 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:17.469 07:09:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.469 [2024-11-20 07:09:59.486077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:17.469 "name": "raid_bdev1", 00:13:17.469 "aliases": [ 00:13:17.469 "68460551-982f-45a4-9b21-5616a3256495" 00:13:17.469 ], 00:13:17.469 "product_name": "Raid Volume", 00:13:17.469 "block_size": 512, 00:13:17.469 "num_blocks": 63488, 00:13:17.469 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:17.469 "assigned_rate_limits": { 00:13:17.469 "rw_ios_per_sec": 0, 00:13:17.469 "rw_mbytes_per_sec": 0, 00:13:17.469 "r_mbytes_per_sec": 0, 00:13:17.469 "w_mbytes_per_sec": 0 00:13:17.469 }, 00:13:17.469 "claimed": false, 00:13:17.469 "zoned": false, 00:13:17.469 "supported_io_types": { 00:13:17.469 "read": true, 00:13:17.469 "write": true, 00:13:17.469 "unmap": false, 00:13:17.469 "flush": false, 00:13:17.469 "reset": true, 00:13:17.469 "nvme_admin": false, 00:13:17.469 "nvme_io": false, 00:13:17.469 "nvme_io_md": false, 00:13:17.469 "write_zeroes": true, 00:13:17.469 "zcopy": false, 00:13:17.469 "get_zone_info": false, 00:13:17.469 "zone_management": false, 00:13:17.469 "zone_append": false, 00:13:17.469 "compare": false, 00:13:17.469 
"compare_and_write": false, 00:13:17.469 "abort": false, 00:13:17.469 "seek_hole": false, 00:13:17.469 "seek_data": false, 00:13:17.469 "copy": false, 00:13:17.469 "nvme_iov_md": false 00:13:17.469 }, 00:13:17.469 "memory_domains": [ 00:13:17.469 { 00:13:17.469 "dma_device_id": "system", 00:13:17.469 "dma_device_type": 1 00:13:17.469 }, 00:13:17.469 { 00:13:17.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.469 "dma_device_type": 2 00:13:17.469 }, 00:13:17.469 { 00:13:17.469 "dma_device_id": "system", 00:13:17.469 "dma_device_type": 1 00:13:17.469 }, 00:13:17.469 { 00:13:17.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.469 "dma_device_type": 2 00:13:17.469 }, 00:13:17.469 { 00:13:17.469 "dma_device_id": "system", 00:13:17.469 "dma_device_type": 1 00:13:17.469 }, 00:13:17.469 { 00:13:17.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.469 "dma_device_type": 2 00:13:17.469 } 00:13:17.469 ], 00:13:17.469 "driver_specific": { 00:13:17.469 "raid": { 00:13:17.469 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:17.469 "strip_size_kb": 0, 00:13:17.469 "state": "online", 00:13:17.469 "raid_level": "raid1", 00:13:17.469 "superblock": true, 00:13:17.469 "num_base_bdevs": 3, 00:13:17.469 "num_base_bdevs_discovered": 3, 00:13:17.469 "num_base_bdevs_operational": 3, 00:13:17.469 "base_bdevs_list": [ 00:13:17.469 { 00:13:17.469 "name": "pt1", 00:13:17.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.469 "is_configured": true, 00:13:17.469 "data_offset": 2048, 00:13:17.469 "data_size": 63488 00:13:17.469 }, 00:13:17.469 { 00:13:17.469 "name": "pt2", 00:13:17.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.469 "is_configured": true, 00:13:17.469 "data_offset": 2048, 00:13:17.469 "data_size": 63488 00:13:17.469 }, 00:13:17.469 { 00:13:17.469 "name": "pt3", 00:13:17.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.469 "is_configured": true, 00:13:17.469 "data_offset": 2048, 00:13:17.469 "data_size": 63488 00:13:17.469 } 
00:13:17.469 ] 00:13:17.469 } 00:13:17.469 } 00:13:17.469 }' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:17.469 pt2 00:13:17.469 pt3' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.469 07:09:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.469 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 [2024-11-20 07:09:59.785647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=68460551-982f-45a4-9b21-5616a3256495 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 68460551-982f-45a4-9b21-5616a3256495 ']' 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 [2024-11-20 07:09:59.833269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.728 [2024-11-20 07:09:59.833304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.728 [2024-11-20 07:09:59.833416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.728 [2024-11-20 07:09:59.833503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.728 [2024-11-20 07:09:59.833513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:17.728 
07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.728 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 [2024-11-20 07:09:59.969020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:17.728 [2024-11-20 07:09:59.970995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:17.728 [2024-11-20 07:09:59.971051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:13:17.728 [2024-11-20 07:09:59.971099] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:17.729 [2024-11-20 07:09:59.971147] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:17.729 [2024-11-20 07:09:59.971166] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:17.729 [2024-11-20 07:09:59.971183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.729 [2024-11-20 07:09:59.971192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:17.729 request: 00:13:17.729 { 00:13:17.729 "name": "raid_bdev1", 00:13:17.729 "raid_level": "raid1", 00:13:17.729 "base_bdevs": [ 00:13:17.729 "malloc1", 00:13:17.729 "malloc2", 00:13:17.729 "malloc3" 00:13:17.729 ], 00:13:17.729 "superblock": false, 00:13:17.729 "method": "bdev_raid_create", 00:13:17.729 "req_id": 1 00:13:17.729 } 00:13:17.729 Got JSON-RPC error response 00:13:17.729 response: 00:13:17.729 { 00:13:17.729 "code": -17, 00:13:17.729 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:17.729 } 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:17.729 07:09:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.729 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.987 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.988 [2024-11-20 07:10:00.024883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:17.988 [2024-11-20 07:10:00.024945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.988 [2024-11-20 07:10:00.024974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:17.988 [2024-11-20 07:10:00.024984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.988 [2024-11-20 07:10:00.027327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.988 [2024-11-20 07:10:00.027386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:17.988 [2024-11-20 07:10:00.027479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:17.988 [2024-11-20 07:10:00.027527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:17.988 pt1 00:13:17.988 07:10:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.988 "name": "raid_bdev1", 00:13:17.988 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:17.988 "strip_size_kb": 0, 00:13:17.988 "state": 
"configuring", 00:13:17.988 "raid_level": "raid1", 00:13:17.988 "superblock": true, 00:13:17.988 "num_base_bdevs": 3, 00:13:17.988 "num_base_bdevs_discovered": 1, 00:13:17.988 "num_base_bdevs_operational": 3, 00:13:17.988 "base_bdevs_list": [ 00:13:17.988 { 00:13:17.988 "name": "pt1", 00:13:17.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.988 "is_configured": true, 00:13:17.988 "data_offset": 2048, 00:13:17.988 "data_size": 63488 00:13:17.988 }, 00:13:17.988 { 00:13:17.988 "name": null, 00:13:17.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.988 "is_configured": false, 00:13:17.988 "data_offset": 2048, 00:13:17.988 "data_size": 63488 00:13:17.988 }, 00:13:17.988 { 00:13:17.988 "name": null, 00:13:17.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.988 "is_configured": false, 00:13:17.988 "data_offset": 2048, 00:13:17.988 "data_size": 63488 00:13:17.988 } 00:13:17.988 ] 00:13:17.988 }' 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.988 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.245 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:18.245 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:18.245 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.245 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.245 [2024-11-20 07:10:00.440255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:18.245 [2024-11-20 07:10:00.440371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.245 [2024-11-20 07:10:00.440403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:18.245 
[2024-11-20 07:10:00.440419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.245 [2024-11-20 07:10:00.441002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.245 [2024-11-20 07:10:00.441048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:18.245 [2024-11-20 07:10:00.441178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:18.245 [2024-11-20 07:10:00.441222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:18.245 pt2 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.246 [2024-11-20 07:10:00.448265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.246 "name": "raid_bdev1", 00:13:18.246 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:18.246 "strip_size_kb": 0, 00:13:18.246 "state": "configuring", 00:13:18.246 "raid_level": "raid1", 00:13:18.246 "superblock": true, 00:13:18.246 "num_base_bdevs": 3, 00:13:18.246 "num_base_bdevs_discovered": 1, 00:13:18.246 "num_base_bdevs_operational": 3, 00:13:18.246 "base_bdevs_list": [ 00:13:18.246 { 00:13:18.246 "name": "pt1", 00:13:18.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.246 "is_configured": true, 00:13:18.246 "data_offset": 2048, 00:13:18.246 "data_size": 63488 00:13:18.246 }, 00:13:18.246 { 00:13:18.246 "name": null, 00:13:18.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.246 "is_configured": false, 00:13:18.246 "data_offset": 0, 00:13:18.246 "data_size": 63488 00:13:18.246 }, 00:13:18.246 { 00:13:18.246 "name": null, 00:13:18.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:18.246 "is_configured": false, 00:13:18.246 
"data_offset": 2048, 00:13:18.246 "data_size": 63488 00:13:18.246 } 00:13:18.246 ] 00:13:18.246 }' 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.246 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.814 [2024-11-20 07:10:00.923425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:18.814 [2024-11-20 07:10:00.923495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.814 [2024-11-20 07:10:00.923516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:18.814 [2024-11-20 07:10:00.923528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.814 [2024-11-20 07:10:00.924050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.814 [2024-11-20 07:10:00.924084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:18.814 [2024-11-20 07:10:00.924172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:18.814 [2024-11-20 07:10:00.924219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:18.814 pt2 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.814 07:10:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.814 [2024-11-20 07:10:00.935373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:18.814 [2024-11-20 07:10:00.935430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.814 [2024-11-20 07:10:00.935452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:18.814 [2024-11-20 07:10:00.935465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.814 [2024-11-20 07:10:00.935912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.814 [2024-11-20 07:10:00.935942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:18.814 [2024-11-20 07:10:00.936020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:18.814 [2024-11-20 07:10:00.936046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:18.814 [2024-11-20 07:10:00.936203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:18.814 [2024-11-20 07:10:00.936228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:18.814 [2024-11-20 07:10:00.936498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:18.814 [2024-11-20 07:10:00.936655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:13:18.814 [2024-11-20 07:10:00.936669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:18.814 [2024-11-20 07:10:00.936816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.814 pt3 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.814 "name": "raid_bdev1", 00:13:18.814 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:18.814 "strip_size_kb": 0, 00:13:18.814 "state": "online", 00:13:18.814 "raid_level": "raid1", 00:13:18.814 "superblock": true, 00:13:18.814 "num_base_bdevs": 3, 00:13:18.814 "num_base_bdevs_discovered": 3, 00:13:18.814 "num_base_bdevs_operational": 3, 00:13:18.814 "base_bdevs_list": [ 00:13:18.814 { 00:13:18.814 "name": "pt1", 00:13:18.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.814 "is_configured": true, 00:13:18.814 "data_offset": 2048, 00:13:18.814 "data_size": 63488 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "name": "pt2", 00:13:18.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.814 "is_configured": true, 00:13:18.814 "data_offset": 2048, 00:13:18.814 "data_size": 63488 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "name": "pt3", 00:13:18.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:18.814 "is_configured": true, 00:13:18.814 "data_offset": 2048, 00:13:18.814 "data_size": 63488 00:13:18.814 } 00:13:18.814 ] 00:13:18.814 }' 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.814 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.382 [2024-11-20 07:10:01.370953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.382 "name": "raid_bdev1", 00:13:19.382 "aliases": [ 00:13:19.382 "68460551-982f-45a4-9b21-5616a3256495" 00:13:19.382 ], 00:13:19.382 "product_name": "Raid Volume", 00:13:19.382 "block_size": 512, 00:13:19.382 "num_blocks": 63488, 00:13:19.382 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:19.382 "assigned_rate_limits": { 00:13:19.382 "rw_ios_per_sec": 0, 00:13:19.382 "rw_mbytes_per_sec": 0, 00:13:19.382 "r_mbytes_per_sec": 0, 00:13:19.382 "w_mbytes_per_sec": 0 00:13:19.382 }, 00:13:19.382 "claimed": false, 00:13:19.382 "zoned": false, 00:13:19.382 "supported_io_types": { 00:13:19.382 "read": true, 00:13:19.382 "write": true, 00:13:19.382 "unmap": false, 00:13:19.382 "flush": false, 00:13:19.382 "reset": true, 00:13:19.382 "nvme_admin": false, 00:13:19.382 "nvme_io": false, 00:13:19.382 "nvme_io_md": false, 00:13:19.382 "write_zeroes": true, 00:13:19.382 "zcopy": false, 00:13:19.382 "get_zone_info": false, 
00:13:19.382 "zone_management": false, 00:13:19.382 "zone_append": false, 00:13:19.382 "compare": false, 00:13:19.382 "compare_and_write": false, 00:13:19.382 "abort": false, 00:13:19.382 "seek_hole": false, 00:13:19.382 "seek_data": false, 00:13:19.382 "copy": false, 00:13:19.382 "nvme_iov_md": false 00:13:19.382 }, 00:13:19.382 "memory_domains": [ 00:13:19.382 { 00:13:19.382 "dma_device_id": "system", 00:13:19.382 "dma_device_type": 1 00:13:19.382 }, 00:13:19.382 { 00:13:19.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.382 "dma_device_type": 2 00:13:19.382 }, 00:13:19.382 { 00:13:19.382 "dma_device_id": "system", 00:13:19.382 "dma_device_type": 1 00:13:19.382 }, 00:13:19.382 { 00:13:19.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.382 "dma_device_type": 2 00:13:19.382 }, 00:13:19.382 { 00:13:19.382 "dma_device_id": "system", 00:13:19.382 "dma_device_type": 1 00:13:19.382 }, 00:13:19.382 { 00:13:19.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.382 "dma_device_type": 2 00:13:19.382 } 00:13:19.382 ], 00:13:19.382 "driver_specific": { 00:13:19.382 "raid": { 00:13:19.382 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:19.382 "strip_size_kb": 0, 00:13:19.382 "state": "online", 00:13:19.382 "raid_level": "raid1", 00:13:19.382 "superblock": true, 00:13:19.382 "num_base_bdevs": 3, 00:13:19.382 "num_base_bdevs_discovered": 3, 00:13:19.382 "num_base_bdevs_operational": 3, 00:13:19.382 "base_bdevs_list": [ 00:13:19.382 { 00:13:19.382 "name": "pt1", 00:13:19.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.382 "is_configured": true, 00:13:19.382 "data_offset": 2048, 00:13:19.382 "data_size": 63488 00:13:19.382 }, 00:13:19.382 { 00:13:19.382 "name": "pt2", 00:13:19.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.382 "is_configured": true, 00:13:19.382 "data_offset": 2048, 00:13:19.382 "data_size": 63488 00:13:19.382 }, 00:13:19.382 { 00:13:19.382 "name": "pt3", 00:13:19.382 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:19.382 "is_configured": true, 00:13:19.382 "data_offset": 2048, 00:13:19.382 "data_size": 63488 00:13:19.382 } 00:13:19.382 ] 00:13:19.382 } 00:13:19.382 } 00:13:19.382 }' 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.382 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:19.382 pt2 00:13:19.382 pt3' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.383 07:10:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.383 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.383 [2024-11-20 07:10:01.626586] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 68460551-982f-45a4-9b21-5616a3256495 '!=' 68460551-982f-45a4-9b21-5616a3256495 ']' 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.642 [2024-11-20 07:10:01.658285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.642 07:10:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.642 "name": "raid_bdev1", 00:13:19.642 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:19.642 "strip_size_kb": 0, 00:13:19.642 "state": "online", 00:13:19.642 "raid_level": "raid1", 00:13:19.642 "superblock": true, 00:13:19.642 "num_base_bdevs": 3, 00:13:19.642 "num_base_bdevs_discovered": 2, 00:13:19.642 "num_base_bdevs_operational": 2, 00:13:19.642 "base_bdevs_list": [ 00:13:19.642 { 00:13:19.642 "name": null, 00:13:19.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.642 "is_configured": false, 00:13:19.642 "data_offset": 0, 00:13:19.642 "data_size": 63488 00:13:19.642 }, 00:13:19.642 { 00:13:19.642 "name": "pt2", 00:13:19.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.642 "is_configured": true, 00:13:19.642 "data_offset": 2048, 00:13:19.642 "data_size": 63488 00:13:19.642 }, 00:13:19.642 { 00:13:19.642 "name": "pt3", 00:13:19.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.642 "is_configured": true, 00:13:19.642 "data_offset": 2048, 00:13:19.642 "data_size": 63488 00:13:19.642 } 
00:13:19.642 ] 00:13:19.642 }' 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.642 07:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.901 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.901 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.901 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.901 [2024-11-20 07:10:02.157358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.901 [2024-11-20 07:10:02.157391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.901 [2024-11-20 07:10:02.157473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.901 [2024-11-20 07:10:02.157534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.901 [2024-11-20 07:10:02.157551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:19.901 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.162 07:10:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.162 [2024-11-20 07:10:02.241208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:20.162 [2024-11-20 07:10:02.241281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.162 [2024-11-20 07:10:02.241299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:20.162 [2024-11-20 07:10:02.241311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.162 [2024-11-20 07:10:02.243636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.162 [2024-11-20 07:10:02.243676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:20.162 [2024-11-20 07:10:02.243754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:20.162 [2024-11-20 07:10:02.243824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:20.162 pt2 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.162 07:10:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.162 "name": "raid_bdev1", 00:13:20.162 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:20.162 "strip_size_kb": 0, 00:13:20.162 "state": "configuring", 00:13:20.162 "raid_level": "raid1", 00:13:20.162 "superblock": true, 00:13:20.162 "num_base_bdevs": 3, 00:13:20.162 "num_base_bdevs_discovered": 1, 00:13:20.162 "num_base_bdevs_operational": 2, 00:13:20.162 "base_bdevs_list": [ 00:13:20.162 { 00:13:20.162 "name": null, 00:13:20.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.162 "is_configured": false, 00:13:20.162 "data_offset": 2048, 00:13:20.162 "data_size": 63488 00:13:20.162 }, 00:13:20.162 { 00:13:20.162 "name": "pt2", 00:13:20.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.162 "is_configured": true, 00:13:20.162 "data_offset": 2048, 00:13:20.162 "data_size": 63488 00:13:20.162 }, 00:13:20.162 { 00:13:20.162 "name": null, 00:13:20.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:20.162 "is_configured": false, 00:13:20.162 "data_offset": 2048, 00:13:20.162 "data_size": 63488 00:13:20.162 } 
00:13:20.162 ] 00:13:20.162 }' 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.162 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.731 [2024-11-20 07:10:02.700487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:20.731 [2024-11-20 07:10:02.700562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.731 [2024-11-20 07:10:02.700583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:20.731 [2024-11-20 07:10:02.700593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.731 [2024-11-20 07:10:02.701090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.731 [2024-11-20 07:10:02.701122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:20.731 [2024-11-20 07:10:02.701248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:20.731 [2024-11-20 07:10:02.701288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:20.731 [2024-11-20 07:10:02.701429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:20.731 [2024-11-20 07:10:02.701448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.731 [2024-11-20 07:10:02.701744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:20.731 [2024-11-20 07:10:02.701933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:20.731 [2024-11-20 07:10:02.701952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:20.731 [2024-11-20 07:10:02.702111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.731 pt3 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.731 
07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.731 "name": "raid_bdev1", 00:13:20.731 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:20.731 "strip_size_kb": 0, 00:13:20.731 "state": "online", 00:13:20.731 "raid_level": "raid1", 00:13:20.731 "superblock": true, 00:13:20.731 "num_base_bdevs": 3, 00:13:20.731 "num_base_bdevs_discovered": 2, 00:13:20.731 "num_base_bdevs_operational": 2, 00:13:20.731 "base_bdevs_list": [ 00:13:20.731 { 00:13:20.731 "name": null, 00:13:20.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.731 "is_configured": false, 00:13:20.731 "data_offset": 2048, 00:13:20.731 "data_size": 63488 00:13:20.731 }, 00:13:20.731 { 00:13:20.731 "name": "pt2", 00:13:20.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.731 "is_configured": true, 00:13:20.731 "data_offset": 2048, 00:13:20.731 "data_size": 63488 00:13:20.731 }, 00:13:20.731 { 00:13:20.731 "name": "pt3", 00:13:20.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:20.731 "is_configured": true, 00:13:20.731 "data_offset": 2048, 00:13:20.731 "data_size": 63488 00:13:20.731 } 00:13:20.731 ] 00:13:20.731 }' 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.731 07:10:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.992 [2024-11-20 07:10:03.123734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.992 [2024-11-20 07:10:03.123770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.992 [2024-11-20 07:10:03.123857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.992 [2024-11-20 07:10:03.123923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.992 [2024-11-20 07:10:03.123937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.992 [2024-11-20 07:10:03.195621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:20.992 [2024-11-20 07:10:03.195684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.992 [2024-11-20 07:10:03.195704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:20.992 [2024-11-20 07:10:03.195713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.992 [2024-11-20 07:10:03.198095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.992 [2024-11-20 07:10:03.198135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:20.992 [2024-11-20 07:10:03.198220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:20.992 [2024-11-20 07:10:03.198271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:20.992 [2024-11-20 07:10:03.198435] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:20.992 [2024-11-20 07:10:03.198451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.992 [2024-11-20 07:10:03.198469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:13:20.992 [2024-11-20 07:10:03.198532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:20.992 pt1 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.992 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.992 "name": "raid_bdev1", 00:13:20.992 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:20.992 "strip_size_kb": 0, 00:13:20.993 "state": "configuring", 00:13:20.993 "raid_level": "raid1", 00:13:20.993 "superblock": true, 00:13:20.993 "num_base_bdevs": 3, 00:13:20.993 "num_base_bdevs_discovered": 1, 00:13:20.993 "num_base_bdevs_operational": 2, 00:13:20.993 "base_bdevs_list": [ 00:13:20.993 { 00:13:20.993 "name": null, 00:13:20.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.993 "is_configured": false, 00:13:20.993 "data_offset": 2048, 00:13:20.993 "data_size": 63488 00:13:20.993 }, 00:13:20.993 { 00:13:20.993 "name": "pt2", 00:13:20.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.993 "is_configured": true, 00:13:20.993 "data_offset": 2048, 00:13:20.993 "data_size": 63488 00:13:20.993 }, 00:13:20.993 { 00:13:20.993 "name": null, 00:13:20.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:20.993 "is_configured": false, 00:13:20.993 "data_offset": 2048, 00:13:20.993 "data_size": 63488 00:13:20.993 } 00:13:20.993 ] 00:13:20.993 }' 00:13:20.993 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.993 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.562 [2024-11-20 07:10:03.742753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:21.562 [2024-11-20 07:10:03.742828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.562 [2024-11-20 07:10:03.742850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:21.562 [2024-11-20 07:10:03.742860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.562 [2024-11-20 07:10:03.743371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.562 [2024-11-20 07:10:03.743398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:21.562 [2024-11-20 07:10:03.743488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:21.562 [2024-11-20 07:10:03.743535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:21.562 [2024-11-20 07:10:03.743692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:21.562 [2024-11-20 07:10:03.743710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:21.562 [2024-11-20 07:10:03.743981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:21.562 [2024-11-20 07:10:03.744160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:21.562 [2024-11-20 07:10:03.744200] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:21.562 [2024-11-20 07:10:03.744379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.562 pt3 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:21.562 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.562 "name": "raid_bdev1", 00:13:21.562 "uuid": "68460551-982f-45a4-9b21-5616a3256495", 00:13:21.562 "strip_size_kb": 0, 00:13:21.562 "state": "online", 00:13:21.562 "raid_level": "raid1", 00:13:21.562 "superblock": true, 00:13:21.562 "num_base_bdevs": 3, 00:13:21.562 "num_base_bdevs_discovered": 2, 00:13:21.562 "num_base_bdevs_operational": 2, 00:13:21.562 "base_bdevs_list": [ 00:13:21.562 { 00:13:21.562 "name": null, 00:13:21.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.562 "is_configured": false, 00:13:21.562 "data_offset": 2048, 00:13:21.562 "data_size": 63488 00:13:21.562 }, 00:13:21.562 { 00:13:21.562 "name": "pt2", 00:13:21.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.562 "is_configured": true, 00:13:21.562 "data_offset": 2048, 00:13:21.562 "data_size": 63488 00:13:21.562 }, 00:13:21.562 { 00:13:21.562 "name": "pt3", 00:13:21.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:21.562 "is_configured": true, 00:13:21.562 "data_offset": 2048, 00:13:21.563 "data_size": 63488 00:13:21.563 } 00:13:21.563 ] 00:13:21.563 }' 00:13:21.563 07:10:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.563 07:10:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:22.128 [2024-11-20 07:10:04.226241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 68460551-982f-45a4-9b21-5616a3256495 '!=' 68460551-982f-45a4-9b21-5616a3256495 ']' 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68953 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68953 ']' 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68953 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68953 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68953' 00:13:22.128 killing process with pid 68953 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68953 00:13:22.128 [2024-11-20 07:10:04.309049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.128 [2024-11-20 07:10:04.309165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.128 [2024-11-20 07:10:04.309232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.128 [2024-11-20 07:10:04.309246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:22.128 07:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68953 00:13:22.386 [2024-11-20 07:10:04.641111] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.763 07:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:23.763 00:13:23.763 real 0m7.982s 00:13:23.763 user 0m12.493s 00:13:23.763 sys 0m1.376s 00:13:23.763 07:10:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.763 07:10:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.763 ************************************ 00:13:23.763 END TEST raid_superblock_test 00:13:23.763 ************************************ 00:13:23.763 07:10:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:23.763 07:10:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:23.763 07:10:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.763 07:10:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.763 ************************************ 00:13:23.763 START TEST raid_read_error_test 00:13:23.763 ************************************ 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:23.763 07:10:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:23.763 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:23.764 07:10:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ojyaTRz25W 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69399 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69399 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69399 ']' 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:23.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.764 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.764 [2024-11-20 07:10:06.015764] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:23.764 [2024-11-20 07:10:06.015902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69399 ] 00:13:24.022 [2024-11-20 07:10:06.191877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.280 [2024-11-20 07:10:06.322902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.280 [2024-11-20 07:10:06.532179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.280 [2024-11-20 07:10:06.532211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 BaseBdev1_malloc 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 true 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 [2024-11-20 07:10:06.959800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:24.849 [2024-11-20 07:10:06.959857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.849 [2024-11-20 07:10:06.959879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:24.849 [2024-11-20 07:10:06.959891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.849 [2024-11-20 07:10:06.962321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.849 [2024-11-20 07:10:06.962395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.849 BaseBdev1 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 BaseBdev2_malloc 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 true 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 [2024-11-20 07:10:07.027330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:24.849 [2024-11-20 07:10:07.027415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.849 [2024-11-20 07:10:07.027440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:24.849 [2024-11-20 07:10:07.027452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.849 [2024-11-20 07:10:07.029924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.849 [2024-11-20 07:10:07.029969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:24.849 BaseBdev2 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 BaseBdev3_malloc 00:13:24.849 07:10:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 true 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.849 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.849 [2024-11-20 07:10:07.108312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:24.849 [2024-11-20 07:10:07.108388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.849 [2024-11-20 07:10:07.108408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:24.849 [2024-11-20 07:10:07.108420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.849 [2024-11-20 07:10:07.110793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.849 [2024-11-20 07:10:07.110832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:25.107 BaseBdev3 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.107 [2024-11-20 07:10:07.120367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.107 [2024-11-20 07:10:07.122294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.107 [2024-11-20 07:10:07.122396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.107 [2024-11-20 07:10:07.122639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:25.107 [2024-11-20 07:10:07.122659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.107 [2024-11-20 07:10:07.122938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:25.107 [2024-11-20 07:10:07.123135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:25.107 [2024-11-20 07:10:07.123157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:25.107 [2024-11-20 07:10:07.123328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.107 07:10:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.107 "name": "raid_bdev1", 00:13:25.107 "uuid": "2155590d-a000-4ac9-b855-d15bb0b804c4", 00:13:25.107 "strip_size_kb": 0, 00:13:25.107 "state": "online", 00:13:25.107 "raid_level": "raid1", 00:13:25.107 "superblock": true, 00:13:25.107 "num_base_bdevs": 3, 00:13:25.107 "num_base_bdevs_discovered": 3, 00:13:25.107 "num_base_bdevs_operational": 3, 00:13:25.107 "base_bdevs_list": [ 00:13:25.107 { 00:13:25.107 "name": "BaseBdev1", 00:13:25.107 "uuid": "449a5c5a-1c4b-5d5d-be18-d127d2251a18", 00:13:25.107 "is_configured": true, 00:13:25.107 "data_offset": 2048, 00:13:25.107 "data_size": 63488 00:13:25.107 }, 00:13:25.107 { 00:13:25.107 "name": "BaseBdev2", 00:13:25.107 "uuid": "c30a5d32-77d1-5bec-9df3-7f04a493336a", 00:13:25.107 "is_configured": true, 00:13:25.107 "data_offset": 2048, 00:13:25.107 "data_size": 63488 
00:13:25.107 }, 00:13:25.107 { 00:13:25.107 "name": "BaseBdev3", 00:13:25.107 "uuid": "a2df8153-1db2-50f6-8257-08c52b3aca52", 00:13:25.107 "is_configured": true, 00:13:25.107 "data_offset": 2048, 00:13:25.107 "data_size": 63488 00:13:25.107 } 00:13:25.107 ] 00:13:25.107 }' 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.107 07:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.366 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:25.366 07:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:25.627 [2024-11-20 07:10:07.633167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.565 
07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.565 "name": "raid_bdev1", 00:13:26.565 "uuid": "2155590d-a000-4ac9-b855-d15bb0b804c4", 00:13:26.565 "strip_size_kb": 0, 00:13:26.565 "state": "online", 00:13:26.565 "raid_level": "raid1", 00:13:26.565 "superblock": true, 00:13:26.565 "num_base_bdevs": 3, 00:13:26.565 "num_base_bdevs_discovered": 3, 00:13:26.565 "num_base_bdevs_operational": 3, 00:13:26.565 "base_bdevs_list": [ 00:13:26.565 { 00:13:26.565 "name": "BaseBdev1", 00:13:26.565 "uuid": "449a5c5a-1c4b-5d5d-be18-d127d2251a18", 
00:13:26.565 "is_configured": true, 00:13:26.565 "data_offset": 2048, 00:13:26.565 "data_size": 63488 00:13:26.565 }, 00:13:26.565 { 00:13:26.565 "name": "BaseBdev2", 00:13:26.565 "uuid": "c30a5d32-77d1-5bec-9df3-7f04a493336a", 00:13:26.565 "is_configured": true, 00:13:26.565 "data_offset": 2048, 00:13:26.565 "data_size": 63488 00:13:26.565 }, 00:13:26.565 { 00:13:26.565 "name": "BaseBdev3", 00:13:26.565 "uuid": "a2df8153-1db2-50f6-8257-08c52b3aca52", 00:13:26.565 "is_configured": true, 00:13:26.565 "data_offset": 2048, 00:13:26.565 "data_size": 63488 00:13:26.565 } 00:13:26.565 ] 00:13:26.565 }' 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.565 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.824 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:26.824 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.824 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.824 [2024-11-20 07:10:08.995067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.824 [2024-11-20 07:10:08.995105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.824 [2024-11-20 07:10:08.998156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.824 [2024-11-20 07:10:08.998213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.824 [2024-11-20 07:10:08.998324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.824 [2024-11-20 07:10:08.998351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:26.824 { 00:13:26.824 "results": [ 00:13:26.824 { 00:13:26.824 "job": "raid_bdev1", 
00:13:26.824 "core_mask": "0x1", 00:13:26.824 "workload": "randrw", 00:13:26.824 "percentage": 50, 00:13:26.824 "status": "finished", 00:13:26.824 "queue_depth": 1, 00:13:26.824 "io_size": 131072, 00:13:26.824 "runtime": 1.362538, 00:13:26.824 "iops": 12280.758408205862, 00:13:26.824 "mibps": 1535.0948010257327, 00:13:26.824 "io_failed": 0, 00:13:26.824 "io_timeout": 0, 00:13:26.824 "avg_latency_us": 78.51828374597487, 00:13:26.824 "min_latency_us": 25.3764192139738, 00:13:26.824 "max_latency_us": 1709.9458515283843 00:13:26.824 } 00:13:26.824 ], 00:13:26.824 "core_count": 1 00:13:26.824 } 00:13:26.824 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.824 07:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69399 00:13:26.824 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69399 ']' 00:13:26.824 07:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69399 00:13:26.824 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:26.824 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.824 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69399 00:13:26.824 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.824 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.824 killing process with pid 69399 00:13:26.824 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69399' 00:13:26.825 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69399 00:13:26.825 07:10:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69399 00:13:26.825 [2024-11-20 07:10:09.036986] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.095 [2024-11-20 07:10:09.286993] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ojyaTRz25W 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:28.472 07:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:28.472 00:13:28.472 real 0m4.702s 00:13:28.472 user 0m5.555s 00:13:28.472 sys 0m0.548s 00:13:28.473 07:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.473 07:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.473 ************************************ 00:13:28.473 END TEST raid_read_error_test 00:13:28.473 ************************************ 00:13:28.473 07:10:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:28.473 07:10:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:28.473 07:10:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.473 07:10:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.473 ************************************ 00:13:28.473 START TEST raid_write_error_test 00:13:28.473 ************************************ 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:28.473 07:10:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LS6LIFzA3d 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69550 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69550 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69550 ']' 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.473 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.731 [2024-11-20 07:10:10.791784] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:13:28.731 [2024-11-20 07:10:10.791901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69550 ] 00:13:28.731 [2024-11-20 07:10:10.971104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.036 [2024-11-20 07:10:11.098215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.315 [2024-11-20 07:10:11.315208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.315 [2024-11-20 07:10:11.315269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.572 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.572 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:29.572 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.573 BaseBdev1_malloc 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.573 true 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.573 [2024-11-20 07:10:11.740028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:29.573 [2024-11-20 07:10:11.740088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.573 [2024-11-20 07:10:11.740113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:29.573 [2024-11-20 07:10:11.740127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.573 [2024-11-20 07:10:11.742593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.573 [2024-11-20 07:10:11.742633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.573 BaseBdev1 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.573 BaseBdev2_malloc 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.573 true 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.573 [2024-11-20 07:10:11.812795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:29.573 [2024-11-20 07:10:11.812865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.573 [2024-11-20 07:10:11.812891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:29.573 [2024-11-20 07:10:11.812904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.573 [2024-11-20 07:10:11.815376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.573 [2024-11-20 07:10:11.815417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.573 BaseBdev2 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.573 07:10:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.573 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.851 BaseBdev3_malloc 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.851 true 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.851 [2024-11-20 07:10:11.896885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:29.851 [2024-11-20 07:10:11.896955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.851 [2024-11-20 07:10:11.896979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:29.851 [2024-11-20 07:10:11.896991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.851 [2024-11-20 07:10:11.899411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.851 [2024-11-20 07:10:11.899451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:29.851 BaseBdev3 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.851 [2024-11-20 07:10:11.908963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.851 [2024-11-20 07:10:11.911012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.851 [2024-11-20 07:10:11.911103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.851 [2024-11-20 07:10:11.911351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:29.851 [2024-11-20 07:10:11.911373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:29.851 [2024-11-20 07:10:11.911681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:29.851 [2024-11-20 07:10:11.911886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.851 [2024-11-20 07:10:11.911906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:29.851 [2024-11-20 07:10:11.912121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.851 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.852 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.852 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.852 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.852 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.852 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.852 "name": "raid_bdev1", 00:13:29.852 "uuid": "f77412b3-1bc7-4f95-9095-5b96dcbdf208", 00:13:29.852 "strip_size_kb": 0, 00:13:29.852 "state": "online", 00:13:29.852 "raid_level": "raid1", 00:13:29.852 "superblock": true, 00:13:29.852 "num_base_bdevs": 3, 00:13:29.852 "num_base_bdevs_discovered": 3, 00:13:29.852 "num_base_bdevs_operational": 3, 00:13:29.852 "base_bdevs_list": [ 00:13:29.852 { 00:13:29.852 "name": "BaseBdev1", 00:13:29.852 
"uuid": "4fdb596b-a62f-5cf1-a9e7-a7d7731b151c", 00:13:29.852 "is_configured": true, 00:13:29.852 "data_offset": 2048, 00:13:29.852 "data_size": 63488 00:13:29.852 }, 00:13:29.852 { 00:13:29.852 "name": "BaseBdev2", 00:13:29.852 "uuid": "0025c394-c8d6-5911-b011-5e295f5dfd99", 00:13:29.852 "is_configured": true, 00:13:29.852 "data_offset": 2048, 00:13:29.852 "data_size": 63488 00:13:29.852 }, 00:13:29.852 { 00:13:29.852 "name": "BaseBdev3", 00:13:29.852 "uuid": "43a2dbe0-2813-5a54-8710-8969c07b29ee", 00:13:29.852 "is_configured": true, 00:13:29.852 "data_offset": 2048, 00:13:29.852 "data_size": 63488 00:13:29.852 } 00:13:29.852 ] 00:13:29.852 }' 00:13:29.852 07:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.852 07:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.111 07:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:30.111 07:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:30.370 [2024-11-20 07:10:12.465601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:31.306 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:31.306 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.306 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.306 [2024-11-20 07:10:13.369170] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:31.306 [2024-11-20 07:10:13.369221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.306 [2024-11-20 07:10:13.369445] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.307 "name": "raid_bdev1", 00:13:31.307 "uuid": "f77412b3-1bc7-4f95-9095-5b96dcbdf208", 00:13:31.307 "strip_size_kb": 0, 00:13:31.307 "state": "online", 00:13:31.307 "raid_level": "raid1", 00:13:31.307 "superblock": true, 00:13:31.307 "num_base_bdevs": 3, 00:13:31.307 "num_base_bdevs_discovered": 2, 00:13:31.307 "num_base_bdevs_operational": 2, 00:13:31.307 "base_bdevs_list": [ 00:13:31.307 { 00:13:31.307 "name": null, 00:13:31.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.307 "is_configured": false, 00:13:31.307 "data_offset": 0, 00:13:31.307 "data_size": 63488 00:13:31.307 }, 00:13:31.307 { 00:13:31.307 "name": "BaseBdev2", 00:13:31.307 "uuid": "0025c394-c8d6-5911-b011-5e295f5dfd99", 00:13:31.307 "is_configured": true, 00:13:31.307 "data_offset": 2048, 00:13:31.307 "data_size": 63488 00:13:31.307 }, 00:13:31.307 { 00:13:31.307 "name": "BaseBdev3", 00:13:31.307 "uuid": "43a2dbe0-2813-5a54-8710-8969c07b29ee", 00:13:31.307 "is_configured": true, 00:13:31.307 "data_offset": 2048, 00:13:31.307 "data_size": 63488 00:13:31.307 } 00:13:31.307 ] 00:13:31.307 }' 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.307 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.566 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.566 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.566 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.825 [2024-11-20 07:10:13.832319] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.825 [2024-11-20 07:10:13.832372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.825 [2024-11-20 07:10:13.835483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.825 [2024-11-20 07:10:13.835552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.825 [2024-11-20 07:10:13.835639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.825 [2024-11-20 07:10:13.835660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:31.825 { 00:13:31.825 "results": [ 00:13:31.825 { 00:13:31.825 "job": "raid_bdev1", 00:13:31.825 "core_mask": "0x1", 00:13:31.825 "workload": "randrw", 00:13:31.825 "percentage": 50, 00:13:31.825 "status": "finished", 00:13:31.825 "queue_depth": 1, 00:13:31.825 "io_size": 131072, 00:13:31.825 "runtime": 1.367387, 00:13:31.825 "iops": 13364.906935637095, 00:13:31.825 "mibps": 1670.6133669546368, 00:13:31.825 "io_failed": 0, 00:13:31.825 "io_timeout": 0, 00:13:31.825 "avg_latency_us": 71.7828196345259, 00:13:31.825 "min_latency_us": 25.7117903930131, 00:13:31.825 "max_latency_us": 1531.0812227074236 00:13:31.825 } 00:13:31.825 ], 00:13:31.825 "core_count": 1 00:13:31.825 } 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69550 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69550 ']' 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69550 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:31.825 07:10:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69550 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.825 killing process with pid 69550 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69550' 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69550 00:13:31.825 [2024-11-20 07:10:13.880684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.825 07:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69550 00:13:32.085 [2024-11-20 07:10:14.146713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LS6LIFzA3d 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:33.472 00:13:33.472 real 0m4.774s 00:13:33.472 user 0m5.676s 00:13:33.472 sys 0m0.576s 00:13:33.472 07:10:15 
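The final check at `bdev_raid.sh@845-847` pipes the saved bdevperf log through `grep -v Job | grep raid_bdev1 | awk '{print $6}'` and requires the result to be `0.00` failed I/Os per second. The sketch below replays that pipeline on a hypothetical stand-in log line; the real bdevperf table layout is not shown in the trace, so only the property the test relies on is reproduced, namely that column 6 of the `raid_bdev1` row carries the failure rate.

```shell
# Hypothetical two-line stand-in for /raidtest/tmp.LS6LIFzA3d; column 6 of the
# raid_bdev1 row is the failed-IO/s figure the test asserts on (io_failed was 0).
bdevperf_log='Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 13364.91 1670.61 0 0 0.00 71.78'
fail_per_s=$(printf '%s\n' "$bdevperf_log" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"
```

With `io_failed: 0` in the results JSON above, the extracted value is `0.00`, so the `[[ 0.00 = \0\.\0\0 ]]` comparison in the trace passes and the test ends successfully.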
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.472 07:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.472 ************************************ 00:13:33.472 END TEST raid_write_error_test 00:13:33.472 ************************************ 00:13:33.472 07:10:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:33.472 07:10:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:33.472 07:10:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:33.472 07:10:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:33.472 07:10:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.472 07:10:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.472 ************************************ 00:13:33.472 START TEST raid_state_function_test 00:13:33.472 ************************************ 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:33.472 
07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69688 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:33.472 Process raid pid: 69688 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69688' 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69688 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69688 ']' 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.472 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.472 [2024-11-20 07:10:15.615314] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:33.472 [2024-11-20 07:10:15.615465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.730 [2024-11-20 07:10:15.793202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.730 [2024-11-20 07:10:15.923949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.989 [2024-11-20 07:10:16.151543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.989 [2024-11-20 07:10:16.151591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.247 [2024-11-20 07:10:16.475117] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.247 [2024-11-20 07:10:16.475173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.247 [2024-11-20 07:10:16.475184] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.247 [2024-11-20 07:10:16.475193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.247 [2024-11-20 07:10:16.475217] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:34.247 [2024-11-20 07:10:16.475226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.247 [2024-11-20 07:10:16.475233] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:34.247 [2024-11-20 07:10:16.475243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.247 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.248 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.248 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.248 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.248 07:10:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.248 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.248 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.505 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.505 "name": "Existed_Raid", 00:13:34.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.505 "strip_size_kb": 64, 00:13:34.505 "state": "configuring", 00:13:34.505 "raid_level": "raid0", 00:13:34.505 "superblock": false, 00:13:34.505 "num_base_bdevs": 4, 00:13:34.505 "num_base_bdevs_discovered": 0, 00:13:34.505 "num_base_bdevs_operational": 4, 00:13:34.505 "base_bdevs_list": [ 00:13:34.505 { 00:13:34.505 "name": "BaseBdev1", 00:13:34.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.505 "is_configured": false, 00:13:34.505 "data_offset": 0, 00:13:34.505 "data_size": 0 00:13:34.505 }, 00:13:34.505 { 00:13:34.505 "name": "BaseBdev2", 00:13:34.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.505 "is_configured": false, 00:13:34.505 "data_offset": 0, 00:13:34.505 "data_size": 0 00:13:34.505 }, 00:13:34.505 { 00:13:34.505 "name": "BaseBdev3", 00:13:34.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.506 "is_configured": false, 00:13:34.506 "data_offset": 0, 00:13:34.506 "data_size": 0 00:13:34.506 }, 00:13:34.506 { 00:13:34.506 "name": "BaseBdev4", 00:13:34.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.506 "is_configured": false, 00:13:34.506 "data_offset": 0, 00:13:34.506 "data_size": 0 00:13:34.506 } 00:13:34.506 ] 00:13:34.506 }' 00:13:34.506 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.506 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
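In the `configuring` state shown above, `Existed_Raid` lists all four base bdevs as placeholders: all-zero UUIDs, `is_configured: false`, and `num_base_bdevs_discovered: 0`. A small runnable sketch of that observation, using a trimmed copy of the `base_bdevs_list` entries from the trace:

```shell
# Trimmed base_bdevs_list rows from the Existed_Raid JSON above, one per line.
base_bdevs_list='
{ "name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": false }
{ "name": "BaseBdev2", "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": false }
{ "name": "BaseBdev3", "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": false }
{ "name": "BaseBdev4", "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": false }'
# Count unconfigured slots; in configuring state this equals num_base_bdevs (4)
# while num_base_bdevs_discovered stays 0 until real bdevs are created.
num_unconfigured=$(printf '%s\n' "$base_bdevs_list" | grep -c '"is_configured": false')
echo "$num_unconfigured"
```

Once `bdev_malloc_create 32 512 -b BaseBdev1` and its peers run, these placeholder rows are claimed one by one and the array can leave the `configuring` state.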
Existed_Raid 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 [2024-11-20 07:10:16.918382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.764 [2024-11-20 07:10:16.918429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 [2024-11-20 07:10:16.930363] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.764 [2024-11-20 07:10:16.930408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.764 [2024-11-20 07:10:16.930418] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.764 [2024-11-20 07:10:16.930429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.764 [2024-11-20 07:10:16.930436] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.764 [2024-11-20 07:10:16.930446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.764 [2024-11-20 07:10:16.930452] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:34.764 [2024-11-20 07:10:16.930462] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 [2024-11-20 07:10:16.974141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.764 BaseBdev1 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.764 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 [ 00:13:34.764 { 00:13:34.764 "name": "BaseBdev1", 00:13:34.764 "aliases": [ 00:13:34.764 "3149d795-50f5-4c4f-91fe-df5739168fd4" 00:13:34.764 ], 00:13:34.764 "product_name": "Malloc disk", 00:13:34.764 "block_size": 512, 00:13:34.764 "num_blocks": 65536, 00:13:34.764 "uuid": "3149d795-50f5-4c4f-91fe-df5739168fd4", 00:13:34.764 "assigned_rate_limits": { 00:13:34.764 "rw_ios_per_sec": 0, 00:13:34.764 "rw_mbytes_per_sec": 0, 00:13:34.764 "r_mbytes_per_sec": 0, 00:13:34.764 "w_mbytes_per_sec": 0 00:13:34.764 }, 00:13:34.764 "claimed": true, 00:13:34.764 "claim_type": "exclusive_write", 00:13:34.764 "zoned": false, 00:13:34.764 "supported_io_types": { 00:13:34.764 "read": true, 00:13:34.764 "write": true, 00:13:34.764 "unmap": true, 00:13:34.764 "flush": true, 00:13:34.764 "reset": true, 00:13:34.764 "nvme_admin": false, 00:13:34.764 "nvme_io": false, 00:13:34.764 "nvme_io_md": false, 00:13:34.764 "write_zeroes": true, 00:13:34.764 "zcopy": true, 00:13:34.764 "get_zone_info": false, 00:13:34.764 "zone_management": false, 00:13:34.764 "zone_append": false, 00:13:34.764 "compare": false, 00:13:34.764 "compare_and_write": false, 00:13:34.764 "abort": true, 00:13:34.764 "seek_hole": false, 00:13:34.764 "seek_data": false, 00:13:34.764 "copy": true, 00:13:34.764 "nvme_iov_md": false 00:13:34.764 }, 00:13:34.764 "memory_domains": [ 00:13:34.764 { 00:13:34.764 "dma_device_id": "system", 00:13:34.764 "dma_device_type": 1 00:13:34.764 }, 00:13:34.764 { 00:13:34.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.764 "dma_device_type": 2 00:13:34.764 } 00:13:34.764 ], 00:13:34.764 "driver_specific": {} 00:13:34.764 } 00:13:34.764 ] 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.022 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.022 "name": "Existed_Raid", 
00:13:35.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.022 "strip_size_kb": 64, 00:13:35.022 "state": "configuring", 00:13:35.022 "raid_level": "raid0", 00:13:35.022 "superblock": false, 00:13:35.022 "num_base_bdevs": 4, 00:13:35.022 "num_base_bdevs_discovered": 1, 00:13:35.022 "num_base_bdevs_operational": 4, 00:13:35.022 "base_bdevs_list": [ 00:13:35.022 { 00:13:35.022 "name": "BaseBdev1", 00:13:35.022 "uuid": "3149d795-50f5-4c4f-91fe-df5739168fd4", 00:13:35.022 "is_configured": true, 00:13:35.022 "data_offset": 0, 00:13:35.022 "data_size": 65536 00:13:35.022 }, 00:13:35.022 { 00:13:35.022 "name": "BaseBdev2", 00:13:35.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.022 "is_configured": false, 00:13:35.022 "data_offset": 0, 00:13:35.022 "data_size": 0 00:13:35.023 }, 00:13:35.023 { 00:13:35.023 "name": "BaseBdev3", 00:13:35.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.023 "is_configured": false, 00:13:35.023 "data_offset": 0, 00:13:35.023 "data_size": 0 00:13:35.023 }, 00:13:35.023 { 00:13:35.023 "name": "BaseBdev4", 00:13:35.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.023 "is_configured": false, 00:13:35.023 "data_offset": 0, 00:13:35.023 "data_size": 0 00:13:35.023 } 00:13:35.023 ] 00:13:35.023 }' 00:13:35.023 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.023 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.282 [2024-11-20 07:10:17.461387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.282 [2024-11-20 07:10:17.461455] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.282 [2024-11-20 07:10:17.473434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.282 [2024-11-20 07:10:17.475407] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.282 [2024-11-20 07:10:17.475444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.282 [2024-11-20 07:10:17.475454] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.282 [2024-11-20 07:10:17.475465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.282 [2024-11-20 07:10:17.475472] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:35.282 [2024-11-20 07:10:17.475480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.282 "name": "Existed_Raid", 00:13:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.282 "strip_size_kb": 64, 00:13:35.282 "state": "configuring", 00:13:35.282 "raid_level": "raid0", 00:13:35.282 "superblock": false, 00:13:35.282 "num_base_bdevs": 4, 00:13:35.282 
"num_base_bdevs_discovered": 1, 00:13:35.282 "num_base_bdevs_operational": 4, 00:13:35.282 "base_bdevs_list": [ 00:13:35.282 { 00:13:35.282 "name": "BaseBdev1", 00:13:35.282 "uuid": "3149d795-50f5-4c4f-91fe-df5739168fd4", 00:13:35.282 "is_configured": true, 00:13:35.282 "data_offset": 0, 00:13:35.282 "data_size": 65536 00:13:35.282 }, 00:13:35.282 { 00:13:35.282 "name": "BaseBdev2", 00:13:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.282 "is_configured": false, 00:13:35.282 "data_offset": 0, 00:13:35.282 "data_size": 0 00:13:35.282 }, 00:13:35.282 { 00:13:35.282 "name": "BaseBdev3", 00:13:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.282 "is_configured": false, 00:13:35.282 "data_offset": 0, 00:13:35.282 "data_size": 0 00:13:35.282 }, 00:13:35.282 { 00:13:35.282 "name": "BaseBdev4", 00:13:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.282 "is_configured": false, 00:13:35.282 "data_offset": 0, 00:13:35.282 "data_size": 0 00:13:35.282 } 00:13:35.282 ] 00:13:35.282 }' 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.282 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 [2024-11-20 07:10:17.986058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.850 BaseBdev2 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:35.850 07:10:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.850 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 [ 00:13:35.850 { 00:13:35.850 "name": "BaseBdev2", 00:13:35.850 "aliases": [ 00:13:35.850 "daac3523-7d1f-4220-9db9-8784f22b4297" 00:13:35.850 ], 00:13:35.850 "product_name": "Malloc disk", 00:13:35.850 "block_size": 512, 00:13:35.850 "num_blocks": 65536, 00:13:35.850 "uuid": "daac3523-7d1f-4220-9db9-8784f22b4297", 00:13:35.850 "assigned_rate_limits": { 00:13:35.850 "rw_ios_per_sec": 0, 00:13:35.850 "rw_mbytes_per_sec": 0, 00:13:35.850 "r_mbytes_per_sec": 0, 00:13:35.850 "w_mbytes_per_sec": 0 00:13:35.850 }, 00:13:35.850 "claimed": true, 00:13:35.850 "claim_type": "exclusive_write", 00:13:35.850 "zoned": false, 00:13:35.850 "supported_io_types": { 
00:13:35.850 "read": true, 00:13:35.850 "write": true, 00:13:35.850 "unmap": true, 00:13:35.850 "flush": true, 00:13:35.850 "reset": true, 00:13:35.850 "nvme_admin": false, 00:13:35.850 "nvme_io": false, 00:13:35.850 "nvme_io_md": false, 00:13:35.850 "write_zeroes": true, 00:13:35.850 "zcopy": true, 00:13:35.850 "get_zone_info": false, 00:13:35.850 "zone_management": false, 00:13:35.850 "zone_append": false, 00:13:35.850 "compare": false, 00:13:35.850 "compare_and_write": false, 00:13:35.850 "abort": true, 00:13:35.850 "seek_hole": false, 00:13:35.850 "seek_data": false, 00:13:35.850 "copy": true, 00:13:35.850 "nvme_iov_md": false 00:13:35.850 }, 00:13:35.850 "memory_domains": [ 00:13:35.850 { 00:13:35.850 "dma_device_id": "system", 00:13:35.850 "dma_device_type": 1 00:13:35.850 }, 00:13:35.850 { 00:13:35.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.850 "dma_device_type": 2 00:13:35.850 } 00:13:35.850 ], 00:13:35.850 "driver_specific": {} 00:13:35.850 } 00:13:35.850 ] 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.850 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.850 "name": "Existed_Raid", 00:13:35.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.850 "strip_size_kb": 64, 00:13:35.850 "state": "configuring", 00:13:35.850 "raid_level": "raid0", 00:13:35.850 "superblock": false, 00:13:35.850 "num_base_bdevs": 4, 00:13:35.850 "num_base_bdevs_discovered": 2, 00:13:35.850 "num_base_bdevs_operational": 4, 00:13:35.850 "base_bdevs_list": [ 00:13:35.850 { 00:13:35.850 "name": "BaseBdev1", 00:13:35.850 "uuid": "3149d795-50f5-4c4f-91fe-df5739168fd4", 00:13:35.850 "is_configured": true, 00:13:35.850 "data_offset": 0, 00:13:35.850 "data_size": 65536 00:13:35.850 }, 00:13:35.850 { 00:13:35.850 "name": "BaseBdev2", 00:13:35.850 "uuid": "daac3523-7d1f-4220-9db9-8784f22b4297", 00:13:35.850 
"is_configured": true, 00:13:35.850 "data_offset": 0, 00:13:35.850 "data_size": 65536 00:13:35.850 }, 00:13:35.850 { 00:13:35.850 "name": "BaseBdev3", 00:13:35.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.850 "is_configured": false, 00:13:35.850 "data_offset": 0, 00:13:35.850 "data_size": 0 00:13:35.850 }, 00:13:35.850 { 00:13:35.850 "name": "BaseBdev4", 00:13:35.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.850 "is_configured": false, 00:13:35.850 "data_offset": 0, 00:13:35.850 "data_size": 0 00:13:35.850 } 00:13:35.850 ] 00:13:35.850 }' 00:13:35.851 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.851 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.418 [2024-11-20 07:10:18.509566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.418 BaseBdev3 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.418 [ 00:13:36.418 { 00:13:36.418 "name": "BaseBdev3", 00:13:36.418 "aliases": [ 00:13:36.418 "ba891e7b-4342-48b8-b004-020cbe20f9e9" 00:13:36.418 ], 00:13:36.418 "product_name": "Malloc disk", 00:13:36.418 "block_size": 512, 00:13:36.418 "num_blocks": 65536, 00:13:36.418 "uuid": "ba891e7b-4342-48b8-b004-020cbe20f9e9", 00:13:36.418 "assigned_rate_limits": { 00:13:36.418 "rw_ios_per_sec": 0, 00:13:36.418 "rw_mbytes_per_sec": 0, 00:13:36.418 "r_mbytes_per_sec": 0, 00:13:36.418 "w_mbytes_per_sec": 0 00:13:36.418 }, 00:13:36.418 "claimed": true, 00:13:36.418 "claim_type": "exclusive_write", 00:13:36.418 "zoned": false, 00:13:36.418 "supported_io_types": { 00:13:36.418 "read": true, 00:13:36.418 "write": true, 00:13:36.418 "unmap": true, 00:13:36.418 "flush": true, 00:13:36.418 "reset": true, 00:13:36.418 "nvme_admin": false, 00:13:36.418 "nvme_io": false, 00:13:36.418 "nvme_io_md": false, 00:13:36.418 "write_zeroes": true, 00:13:36.418 "zcopy": true, 00:13:36.418 "get_zone_info": false, 00:13:36.418 "zone_management": false, 00:13:36.418 "zone_append": false, 00:13:36.418 "compare": false, 00:13:36.418 "compare_and_write": false, 
00:13:36.418 "abort": true, 00:13:36.418 "seek_hole": false, 00:13:36.418 "seek_data": false, 00:13:36.418 "copy": true, 00:13:36.418 "nvme_iov_md": false 00:13:36.418 }, 00:13:36.418 "memory_domains": [ 00:13:36.418 { 00:13:36.418 "dma_device_id": "system", 00:13:36.418 "dma_device_type": 1 00:13:36.418 }, 00:13:36.418 { 00:13:36.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.418 "dma_device_type": 2 00:13:36.418 } 00:13:36.418 ], 00:13:36.418 "driver_specific": {} 00:13:36.418 } 00:13:36.418 ] 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.418 "name": "Existed_Raid", 00:13:36.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.418 "strip_size_kb": 64, 00:13:36.418 "state": "configuring", 00:13:36.418 "raid_level": "raid0", 00:13:36.418 "superblock": false, 00:13:36.418 "num_base_bdevs": 4, 00:13:36.418 "num_base_bdevs_discovered": 3, 00:13:36.418 "num_base_bdevs_operational": 4, 00:13:36.418 "base_bdevs_list": [ 00:13:36.418 { 00:13:36.418 "name": "BaseBdev1", 00:13:36.418 "uuid": "3149d795-50f5-4c4f-91fe-df5739168fd4", 00:13:36.418 "is_configured": true, 00:13:36.418 "data_offset": 0, 00:13:36.418 "data_size": 65536 00:13:36.418 }, 00:13:36.418 { 00:13:36.418 "name": "BaseBdev2", 00:13:36.418 "uuid": "daac3523-7d1f-4220-9db9-8784f22b4297", 00:13:36.418 "is_configured": true, 00:13:36.418 "data_offset": 0, 00:13:36.418 "data_size": 65536 00:13:36.418 }, 00:13:36.418 { 00:13:36.418 "name": "BaseBdev3", 00:13:36.418 "uuid": "ba891e7b-4342-48b8-b004-020cbe20f9e9", 00:13:36.418 "is_configured": true, 00:13:36.418 "data_offset": 0, 00:13:36.418 "data_size": 65536 00:13:36.418 }, 00:13:36.418 { 00:13:36.418 "name": "BaseBdev4", 00:13:36.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.418 "is_configured": false, 
00:13:36.418 "data_offset": 0, 00:13:36.418 "data_size": 0 00:13:36.418 } 00:13:36.418 ] 00:13:36.418 }' 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.418 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.985 [2024-11-20 07:10:19.070258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:36.985 [2024-11-20 07:10:19.070323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:36.985 [2024-11-20 07:10:19.070356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:36.985 [2024-11-20 07:10:19.070664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:36.985 [2024-11-20 07:10:19.070880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:36.985 [2024-11-20 07:10:19.070904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:36.985 [2024-11-20 07:10:19.071202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.985 BaseBdev4 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.985 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.985 [ 00:13:36.985 { 00:13:36.985 "name": "BaseBdev4", 00:13:36.985 "aliases": [ 00:13:36.985 "b097e23f-31d9-4b96-8ee2-744acb100dde" 00:13:36.985 ], 00:13:36.985 "product_name": "Malloc disk", 00:13:36.985 "block_size": 512, 00:13:36.985 "num_blocks": 65536, 00:13:36.985 "uuid": "b097e23f-31d9-4b96-8ee2-744acb100dde", 00:13:36.985 "assigned_rate_limits": { 00:13:36.985 "rw_ios_per_sec": 0, 00:13:36.985 "rw_mbytes_per_sec": 0, 00:13:36.985 "r_mbytes_per_sec": 0, 00:13:36.985 "w_mbytes_per_sec": 0 00:13:36.985 }, 00:13:36.985 "claimed": true, 00:13:36.985 "claim_type": "exclusive_write", 00:13:36.985 "zoned": false, 00:13:36.985 "supported_io_types": { 00:13:36.985 "read": true, 00:13:36.985 "write": true, 00:13:36.986 "unmap": true, 00:13:36.986 "flush": true, 00:13:36.986 "reset": true, 00:13:36.986 
"nvme_admin": false, 00:13:36.986 "nvme_io": false, 00:13:36.986 "nvme_io_md": false, 00:13:36.986 "write_zeroes": true, 00:13:36.986 "zcopy": true, 00:13:36.986 "get_zone_info": false, 00:13:36.986 "zone_management": false, 00:13:36.986 "zone_append": false, 00:13:36.986 "compare": false, 00:13:36.986 "compare_and_write": false, 00:13:36.986 "abort": true, 00:13:36.986 "seek_hole": false, 00:13:36.986 "seek_data": false, 00:13:36.986 "copy": true, 00:13:36.986 "nvme_iov_md": false 00:13:36.986 }, 00:13:36.986 "memory_domains": [ 00:13:36.986 { 00:13:36.986 "dma_device_id": "system", 00:13:36.986 "dma_device_type": 1 00:13:36.986 }, 00:13:36.986 { 00:13:36.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.986 "dma_device_type": 2 00:13:36.986 } 00:13:36.986 ], 00:13:36.986 "driver_specific": {} 00:13:36.986 } 00:13:36.986 ] 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.986 07:10:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.986 "name": "Existed_Raid", 00:13:36.986 "uuid": "66cb89db-028c-44ae-8983-fe2878d170e7", 00:13:36.986 "strip_size_kb": 64, 00:13:36.986 "state": "online", 00:13:36.986 "raid_level": "raid0", 00:13:36.986 "superblock": false, 00:13:36.986 "num_base_bdevs": 4, 00:13:36.986 "num_base_bdevs_discovered": 4, 00:13:36.986 "num_base_bdevs_operational": 4, 00:13:36.986 "base_bdevs_list": [ 00:13:36.986 { 00:13:36.986 "name": "BaseBdev1", 00:13:36.986 "uuid": "3149d795-50f5-4c4f-91fe-df5739168fd4", 00:13:36.986 "is_configured": true, 00:13:36.986 "data_offset": 0, 00:13:36.986 "data_size": 65536 00:13:36.986 }, 00:13:36.986 { 00:13:36.986 "name": "BaseBdev2", 00:13:36.986 "uuid": "daac3523-7d1f-4220-9db9-8784f22b4297", 00:13:36.986 "is_configured": true, 00:13:36.986 "data_offset": 0, 00:13:36.986 "data_size": 65536 00:13:36.986 }, 00:13:36.986 { 00:13:36.986 "name": "BaseBdev3", 00:13:36.986 "uuid": 
"ba891e7b-4342-48b8-b004-020cbe20f9e9", 00:13:36.986 "is_configured": true, 00:13:36.986 "data_offset": 0, 00:13:36.986 "data_size": 65536 00:13:36.986 }, 00:13:36.986 { 00:13:36.986 "name": "BaseBdev4", 00:13:36.986 "uuid": "b097e23f-31d9-4b96-8ee2-744acb100dde", 00:13:36.986 "is_configured": true, 00:13:36.986 "data_offset": 0, 00:13:36.986 "data_size": 65536 00:13:36.986 } 00:13:36.986 ] 00:13:36.986 }' 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.986 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.552 [2024-11-20 07:10:19.561876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.552 07:10:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.552 "name": "Existed_Raid", 00:13:37.552 "aliases": [ 00:13:37.552 "66cb89db-028c-44ae-8983-fe2878d170e7" 00:13:37.552 ], 00:13:37.552 "product_name": "Raid Volume", 00:13:37.552 "block_size": 512, 00:13:37.552 "num_blocks": 262144, 00:13:37.552 "uuid": "66cb89db-028c-44ae-8983-fe2878d170e7", 00:13:37.552 "assigned_rate_limits": { 00:13:37.552 "rw_ios_per_sec": 0, 00:13:37.552 "rw_mbytes_per_sec": 0, 00:13:37.552 "r_mbytes_per_sec": 0, 00:13:37.552 "w_mbytes_per_sec": 0 00:13:37.552 }, 00:13:37.552 "claimed": false, 00:13:37.552 "zoned": false, 00:13:37.552 "supported_io_types": { 00:13:37.552 "read": true, 00:13:37.552 "write": true, 00:13:37.552 "unmap": true, 00:13:37.552 "flush": true, 00:13:37.552 "reset": true, 00:13:37.552 "nvme_admin": false, 00:13:37.552 "nvme_io": false, 00:13:37.552 "nvme_io_md": false, 00:13:37.552 "write_zeroes": true, 00:13:37.552 "zcopy": false, 00:13:37.552 "get_zone_info": false, 00:13:37.552 "zone_management": false, 00:13:37.552 "zone_append": false, 00:13:37.552 "compare": false, 00:13:37.552 "compare_and_write": false, 00:13:37.552 "abort": false, 00:13:37.552 "seek_hole": false, 00:13:37.552 "seek_data": false, 00:13:37.552 "copy": false, 00:13:37.552 "nvme_iov_md": false 00:13:37.552 }, 00:13:37.552 "memory_domains": [ 00:13:37.552 { 00:13:37.552 "dma_device_id": "system", 00:13:37.552 "dma_device_type": 1 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.552 "dma_device_type": 2 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "dma_device_id": "system", 00:13:37.552 "dma_device_type": 1 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.552 "dma_device_type": 2 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "dma_device_id": "system", 00:13:37.552 "dma_device_type": 1 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:37.552 "dma_device_type": 2 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "dma_device_id": "system", 00:13:37.552 "dma_device_type": 1 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.552 "dma_device_type": 2 00:13:37.552 } 00:13:37.552 ], 00:13:37.552 "driver_specific": { 00:13:37.552 "raid": { 00:13:37.552 "uuid": "66cb89db-028c-44ae-8983-fe2878d170e7", 00:13:37.552 "strip_size_kb": 64, 00:13:37.552 "state": "online", 00:13:37.552 "raid_level": "raid0", 00:13:37.552 "superblock": false, 00:13:37.552 "num_base_bdevs": 4, 00:13:37.552 "num_base_bdevs_discovered": 4, 00:13:37.552 "num_base_bdevs_operational": 4, 00:13:37.552 "base_bdevs_list": [ 00:13:37.552 { 00:13:37.552 "name": "BaseBdev1", 00:13:37.552 "uuid": "3149d795-50f5-4c4f-91fe-df5739168fd4", 00:13:37.552 "is_configured": true, 00:13:37.552 "data_offset": 0, 00:13:37.552 "data_size": 65536 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "name": "BaseBdev2", 00:13:37.552 "uuid": "daac3523-7d1f-4220-9db9-8784f22b4297", 00:13:37.552 "is_configured": true, 00:13:37.552 "data_offset": 0, 00:13:37.552 "data_size": 65536 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "name": "BaseBdev3", 00:13:37.552 "uuid": "ba891e7b-4342-48b8-b004-020cbe20f9e9", 00:13:37.552 "is_configured": true, 00:13:37.552 "data_offset": 0, 00:13:37.552 "data_size": 65536 00:13:37.552 }, 00:13:37.552 { 00:13:37.552 "name": "BaseBdev4", 00:13:37.552 "uuid": "b097e23f-31d9-4b96-8ee2-744acb100dde", 00:13:37.552 "is_configured": true, 00:13:37.552 "data_offset": 0, 00:13:37.552 "data_size": 65536 00:13:37.552 } 00:13:37.552 ] 00:13:37.552 } 00:13:37.552 } 00:13:37.552 }' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:37.552 BaseBdev2 00:13:37.552 BaseBdev3 
00:13:37.552 BaseBdev4' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.552 07:10:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.552 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.811 07:10:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.811 [2024-11-20 07:10:19.889061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.811 [2024-11-20 07:10:19.889096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.811 [2024-11-20 07:10:19.889177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:37.811 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.811 "name": "Existed_Raid", 00:13:37.811 "uuid": "66cb89db-028c-44ae-8983-fe2878d170e7", 00:13:37.811 "strip_size_kb": 64, 00:13:37.811 "state": "offline", 00:13:37.811 "raid_level": "raid0", 00:13:37.811 "superblock": false, 00:13:37.811 "num_base_bdevs": 4, 00:13:37.811 "num_base_bdevs_discovered": 3, 00:13:37.811 "num_base_bdevs_operational": 3, 00:13:37.811 "base_bdevs_list": [ 00:13:37.811 { 00:13:37.811 "name": null, 00:13:37.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.811 "is_configured": false, 00:13:37.811 "data_offset": 0, 00:13:37.811 "data_size": 65536 00:13:37.811 }, 00:13:37.811 { 00:13:37.811 "name": "BaseBdev2", 00:13:37.811 "uuid": "daac3523-7d1f-4220-9db9-8784f22b4297", 00:13:37.811 "is_configured": 
true, 00:13:37.811 "data_offset": 0, 00:13:37.811 "data_size": 65536 00:13:37.811 }, 00:13:37.811 { 00:13:37.811 "name": "BaseBdev3", 00:13:37.811 "uuid": "ba891e7b-4342-48b8-b004-020cbe20f9e9", 00:13:37.811 "is_configured": true, 00:13:37.811 "data_offset": 0, 00:13:37.811 "data_size": 65536 00:13:37.811 }, 00:13:37.811 { 00:13:37.811 "name": "BaseBdev4", 00:13:37.811 "uuid": "b097e23f-31d9-4b96-8ee2-744acb100dde", 00:13:37.811 "is_configured": true, 00:13:37.811 "data_offset": 0, 00:13:37.811 "data_size": 65536 00:13:37.811 } 00:13:37.811 ] 00:13:37.811 }' 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.811 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.386 [2024-11-20 07:10:20.512915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.386 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.643 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:38.643 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.643 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:38.643 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.643 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.643 [2024-11-20 07:10:20.656586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:38.644 07:10:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.644 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.644 [2024-11-20 07:10:20.821944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:38.644 [2024-11-20 07:10:20.822005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.902 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.902 BaseBdev2 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.902 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.902 [ 00:13:38.902 { 00:13:38.902 "name": "BaseBdev2", 00:13:38.902 "aliases": [ 00:13:38.902 "a932164e-a5f4-4cf3-ab3c-110538528d4d" 00:13:38.902 ], 00:13:38.902 "product_name": "Malloc disk", 00:13:38.902 "block_size": 512, 00:13:38.902 "num_blocks": 65536, 00:13:38.902 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:38.902 "assigned_rate_limits": { 00:13:38.902 "rw_ios_per_sec": 0, 00:13:38.902 "rw_mbytes_per_sec": 0, 00:13:38.902 "r_mbytes_per_sec": 0, 00:13:38.902 "w_mbytes_per_sec": 0 00:13:38.902 }, 00:13:38.902 "claimed": false, 00:13:38.902 "zoned": false, 00:13:38.902 "supported_io_types": { 00:13:38.902 "read": true, 00:13:38.902 "write": true, 00:13:38.902 "unmap": true, 00:13:38.902 "flush": true, 00:13:38.902 "reset": true, 00:13:38.902 "nvme_admin": false, 00:13:38.902 "nvme_io": false, 00:13:38.902 "nvme_io_md": false, 00:13:38.902 "write_zeroes": true, 00:13:38.902 "zcopy": true, 00:13:38.903 "get_zone_info": false, 00:13:38.903 "zone_management": false, 00:13:38.903 "zone_append": false, 00:13:38.903 "compare": false, 00:13:38.903 "compare_and_write": false, 00:13:38.903 "abort": true, 00:13:38.903 "seek_hole": false, 00:13:38.903 
"seek_data": false, 00:13:38.903 "copy": true, 00:13:38.903 "nvme_iov_md": false 00:13:38.903 }, 00:13:38.903 "memory_domains": [ 00:13:38.903 { 00:13:38.903 "dma_device_id": "system", 00:13:38.903 "dma_device_type": 1 00:13:38.903 }, 00:13:38.903 { 00:13:38.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.903 "dma_device_type": 2 00:13:38.903 } 00:13:38.903 ], 00:13:38.903 "driver_specific": {} 00:13:38.903 } 00:13:38.903 ] 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.903 BaseBdev3 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.903 [ 00:13:38.903 { 00:13:38.903 "name": "BaseBdev3", 00:13:38.903 "aliases": [ 00:13:38.903 "53c9911f-64fa-4e50-8333-2949d430ff1e" 00:13:38.903 ], 00:13:38.903 "product_name": "Malloc disk", 00:13:38.903 "block_size": 512, 00:13:38.903 "num_blocks": 65536, 00:13:38.903 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:38.903 "assigned_rate_limits": { 00:13:38.903 "rw_ios_per_sec": 0, 00:13:38.903 "rw_mbytes_per_sec": 0, 00:13:38.903 "r_mbytes_per_sec": 0, 00:13:38.903 "w_mbytes_per_sec": 0 00:13:38.903 }, 00:13:38.903 "claimed": false, 00:13:38.903 "zoned": false, 00:13:38.903 "supported_io_types": { 00:13:38.903 "read": true, 00:13:38.903 "write": true, 00:13:38.903 "unmap": true, 00:13:38.903 "flush": true, 00:13:38.903 "reset": true, 00:13:38.903 "nvme_admin": false, 00:13:38.903 "nvme_io": false, 00:13:38.903 "nvme_io_md": false, 00:13:38.903 "write_zeroes": true, 00:13:38.903 "zcopy": true, 00:13:38.903 "get_zone_info": false, 00:13:38.903 "zone_management": false, 00:13:38.903 "zone_append": false, 00:13:38.903 "compare": false, 00:13:38.903 "compare_and_write": false, 00:13:38.903 "abort": true, 00:13:38.903 "seek_hole": false, 00:13:38.903 "seek_data": false, 
00:13:38.903 "copy": true, 00:13:38.903 "nvme_iov_md": false 00:13:38.903 }, 00:13:38.903 "memory_domains": [ 00:13:38.903 { 00:13:38.903 "dma_device_id": "system", 00:13:38.903 "dma_device_type": 1 00:13:38.903 }, 00:13:38.903 { 00:13:38.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.903 "dma_device_type": 2 00:13:38.903 } 00:13:38.903 ], 00:13:38.903 "driver_specific": {} 00:13:38.903 } 00:13:38.903 ] 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.903 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.161 BaseBdev4 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.161 
07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.161 [ 00:13:39.161 { 00:13:39.161 "name": "BaseBdev4", 00:13:39.161 "aliases": [ 00:13:39.161 "bc32ecc8-fd80-4288-9870-bfa525f52e6e" 00:13:39.161 ], 00:13:39.161 "product_name": "Malloc disk", 00:13:39.161 "block_size": 512, 00:13:39.161 "num_blocks": 65536, 00:13:39.161 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:39.161 "assigned_rate_limits": { 00:13:39.161 "rw_ios_per_sec": 0, 00:13:39.161 "rw_mbytes_per_sec": 0, 00:13:39.161 "r_mbytes_per_sec": 0, 00:13:39.161 "w_mbytes_per_sec": 0 00:13:39.161 }, 00:13:39.161 "claimed": false, 00:13:39.161 "zoned": false, 00:13:39.161 "supported_io_types": { 00:13:39.161 "read": true, 00:13:39.161 "write": true, 00:13:39.161 "unmap": true, 00:13:39.161 "flush": true, 00:13:39.161 "reset": true, 00:13:39.161 "nvme_admin": false, 00:13:39.161 "nvme_io": false, 00:13:39.161 "nvme_io_md": false, 00:13:39.161 "write_zeroes": true, 00:13:39.161 "zcopy": true, 00:13:39.161 "get_zone_info": false, 00:13:39.161 "zone_management": false, 00:13:39.161 "zone_append": false, 00:13:39.161 "compare": false, 00:13:39.161 "compare_and_write": false, 00:13:39.161 "abort": true, 00:13:39.161 "seek_hole": false, 00:13:39.161 "seek_data": false, 00:13:39.161 
"copy": true, 00:13:39.161 "nvme_iov_md": false 00:13:39.161 }, 00:13:39.161 "memory_domains": [ 00:13:39.161 { 00:13:39.161 "dma_device_id": "system", 00:13:39.161 "dma_device_type": 1 00:13:39.161 }, 00:13:39.161 { 00:13:39.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.161 "dma_device_type": 2 00:13:39.161 } 00:13:39.161 ], 00:13:39.161 "driver_specific": {} 00:13:39.161 } 00:13:39.161 ] 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.161 [2024-11-20 07:10:21.210279] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.161 [2024-11-20 07:10:21.210329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.161 [2024-11-20 07:10:21.210366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.161 [2024-11-20 07:10:21.212395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:39.161 [2024-11-20 07:10:21.212456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.161 07:10:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.161 "name": "Existed_Raid", 00:13:39.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.161 "strip_size_kb": 64, 00:13:39.161 "state": "configuring", 00:13:39.161 
"raid_level": "raid0", 00:13:39.161 "superblock": false, 00:13:39.161 "num_base_bdevs": 4, 00:13:39.161 "num_base_bdevs_discovered": 3, 00:13:39.161 "num_base_bdevs_operational": 4, 00:13:39.161 "base_bdevs_list": [ 00:13:39.161 { 00:13:39.161 "name": "BaseBdev1", 00:13:39.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.161 "is_configured": false, 00:13:39.161 "data_offset": 0, 00:13:39.161 "data_size": 0 00:13:39.161 }, 00:13:39.161 { 00:13:39.161 "name": "BaseBdev2", 00:13:39.161 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:39.161 "is_configured": true, 00:13:39.161 "data_offset": 0, 00:13:39.161 "data_size": 65536 00:13:39.161 }, 00:13:39.161 { 00:13:39.161 "name": "BaseBdev3", 00:13:39.161 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:39.161 "is_configured": true, 00:13:39.161 "data_offset": 0, 00:13:39.161 "data_size": 65536 00:13:39.161 }, 00:13:39.161 { 00:13:39.161 "name": "BaseBdev4", 00:13:39.161 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:39.161 "is_configured": true, 00:13:39.161 "data_offset": 0, 00:13:39.161 "data_size": 65536 00:13:39.161 } 00:13:39.161 ] 00:13:39.161 }' 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.161 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.420 [2024-11-20 07:10:21.669525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.420 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.678 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.678 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.678 "name": "Existed_Raid", 00:13:39.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.678 "strip_size_kb": 64, 00:13:39.678 "state": "configuring", 00:13:39.678 "raid_level": "raid0", 00:13:39.678 "superblock": false, 00:13:39.678 
"num_base_bdevs": 4, 00:13:39.678 "num_base_bdevs_discovered": 2, 00:13:39.678 "num_base_bdevs_operational": 4, 00:13:39.678 "base_bdevs_list": [ 00:13:39.678 { 00:13:39.678 "name": "BaseBdev1", 00:13:39.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.678 "is_configured": false, 00:13:39.678 "data_offset": 0, 00:13:39.678 "data_size": 0 00:13:39.678 }, 00:13:39.678 { 00:13:39.678 "name": null, 00:13:39.678 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:39.678 "is_configured": false, 00:13:39.678 "data_offset": 0, 00:13:39.678 "data_size": 65536 00:13:39.678 }, 00:13:39.678 { 00:13:39.678 "name": "BaseBdev3", 00:13:39.678 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:39.678 "is_configured": true, 00:13:39.678 "data_offset": 0, 00:13:39.678 "data_size": 65536 00:13:39.678 }, 00:13:39.678 { 00:13:39.678 "name": "BaseBdev4", 00:13:39.678 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:39.678 "is_configured": true, 00:13:39.678 "data_offset": 0, 00:13:39.678 "data_size": 65536 00:13:39.678 } 00:13:39.678 ] 00:13:39.678 }' 00:13:39.678 07:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.678 07:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:39.938 07:10:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.938 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.196 [2024-11-20 07:10:22.244844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.196 BaseBdev1 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.196 [ 00:13:40.196 { 00:13:40.196 "name": "BaseBdev1", 00:13:40.196 "aliases": [ 00:13:40.196 "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb" 00:13:40.196 ], 00:13:40.196 "product_name": "Malloc disk", 00:13:40.196 "block_size": 512, 00:13:40.196 "num_blocks": 65536, 00:13:40.196 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:40.196 "assigned_rate_limits": { 00:13:40.196 "rw_ios_per_sec": 0, 00:13:40.196 "rw_mbytes_per_sec": 0, 00:13:40.196 "r_mbytes_per_sec": 0, 00:13:40.196 "w_mbytes_per_sec": 0 00:13:40.196 }, 00:13:40.196 "claimed": true, 00:13:40.196 "claim_type": "exclusive_write", 00:13:40.196 "zoned": false, 00:13:40.196 "supported_io_types": { 00:13:40.196 "read": true, 00:13:40.196 "write": true, 00:13:40.196 "unmap": true, 00:13:40.196 "flush": true, 00:13:40.196 "reset": true, 00:13:40.196 "nvme_admin": false, 00:13:40.196 "nvme_io": false, 00:13:40.196 "nvme_io_md": false, 00:13:40.196 "write_zeroes": true, 00:13:40.196 "zcopy": true, 00:13:40.196 "get_zone_info": false, 00:13:40.196 "zone_management": false, 00:13:40.196 "zone_append": false, 00:13:40.196 "compare": false, 00:13:40.196 "compare_and_write": false, 00:13:40.196 "abort": true, 00:13:40.196 "seek_hole": false, 00:13:40.196 "seek_data": false, 00:13:40.196 "copy": true, 00:13:40.196 "nvme_iov_md": false 00:13:40.196 }, 00:13:40.196 "memory_domains": [ 00:13:40.196 { 00:13:40.196 "dma_device_id": "system", 00:13:40.196 "dma_device_type": 1 00:13:40.196 }, 00:13:40.196 { 00:13:40.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.196 "dma_device_type": 2 00:13:40.196 } 00:13:40.196 ], 00:13:40.196 "driver_specific": {} 00:13:40.196 } 00:13:40.196 ] 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.196 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.196 "name": "Existed_Raid", 00:13:40.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.196 "strip_size_kb": 64, 00:13:40.196 "state": "configuring", 00:13:40.196 "raid_level": "raid0", 00:13:40.196 "superblock": false, 
00:13:40.196 "num_base_bdevs": 4, 00:13:40.196 "num_base_bdevs_discovered": 3, 00:13:40.196 "num_base_bdevs_operational": 4, 00:13:40.196 "base_bdevs_list": [ 00:13:40.196 { 00:13:40.196 "name": "BaseBdev1", 00:13:40.196 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:40.196 "is_configured": true, 00:13:40.196 "data_offset": 0, 00:13:40.196 "data_size": 65536 00:13:40.196 }, 00:13:40.196 { 00:13:40.196 "name": null, 00:13:40.197 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:40.197 "is_configured": false, 00:13:40.197 "data_offset": 0, 00:13:40.197 "data_size": 65536 00:13:40.197 }, 00:13:40.197 { 00:13:40.197 "name": "BaseBdev3", 00:13:40.197 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:40.197 "is_configured": true, 00:13:40.197 "data_offset": 0, 00:13:40.197 "data_size": 65536 00:13:40.197 }, 00:13:40.197 { 00:13:40.197 "name": "BaseBdev4", 00:13:40.197 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:40.197 "is_configured": true, 00:13:40.197 "data_offset": 0, 00:13:40.197 "data_size": 65536 00:13:40.197 } 00:13:40.197 ] 00:13:40.197 }' 00:13:40.197 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.197 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.454 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.454 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.454 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.454 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:40.712 07:10:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.712 [2024-11-20 07:10:22.756087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.712 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.713 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.713 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.713 07:10:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.713 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.713 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.713 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.713 "name": "Existed_Raid", 00:13:40.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.713 "strip_size_kb": 64, 00:13:40.713 "state": "configuring", 00:13:40.713 "raid_level": "raid0", 00:13:40.713 "superblock": false, 00:13:40.713 "num_base_bdevs": 4, 00:13:40.713 "num_base_bdevs_discovered": 2, 00:13:40.713 "num_base_bdevs_operational": 4, 00:13:40.713 "base_bdevs_list": [ 00:13:40.713 { 00:13:40.713 "name": "BaseBdev1", 00:13:40.713 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:40.713 "is_configured": true, 00:13:40.713 "data_offset": 0, 00:13:40.713 "data_size": 65536 00:13:40.713 }, 00:13:40.713 { 00:13:40.713 "name": null, 00:13:40.713 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:40.713 "is_configured": false, 00:13:40.713 "data_offset": 0, 00:13:40.713 "data_size": 65536 00:13:40.713 }, 00:13:40.713 { 00:13:40.713 "name": null, 00:13:40.713 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:40.713 "is_configured": false, 00:13:40.713 "data_offset": 0, 00:13:40.713 "data_size": 65536 00:13:40.713 }, 00:13:40.713 { 00:13:40.713 "name": "BaseBdev4", 00:13:40.713 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:40.713 "is_configured": true, 00:13:40.713 "data_offset": 0, 00:13:40.713 "data_size": 65536 00:13:40.713 } 00:13:40.713 ] 00:13:40.713 }' 00:13:40.713 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.713 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.971 07:10:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.971 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.971 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.971 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.971 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.229 [2024-11-20 07:10:23.243236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.229 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.230 "name": "Existed_Raid", 00:13:41.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.230 "strip_size_kb": 64, 00:13:41.230 "state": "configuring", 00:13:41.230 "raid_level": "raid0", 00:13:41.230 "superblock": false, 00:13:41.230 "num_base_bdevs": 4, 00:13:41.230 "num_base_bdevs_discovered": 3, 00:13:41.230 "num_base_bdevs_operational": 4, 00:13:41.230 "base_bdevs_list": [ 00:13:41.230 { 00:13:41.230 "name": "BaseBdev1", 00:13:41.230 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:41.230 "is_configured": true, 00:13:41.230 "data_offset": 0, 00:13:41.230 "data_size": 65536 00:13:41.230 }, 00:13:41.230 { 00:13:41.230 "name": null, 00:13:41.230 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:41.230 "is_configured": false, 00:13:41.230 "data_offset": 0, 00:13:41.230 "data_size": 65536 00:13:41.230 }, 00:13:41.230 { 00:13:41.230 "name": "BaseBdev3", 00:13:41.230 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 
00:13:41.230 "is_configured": true, 00:13:41.230 "data_offset": 0, 00:13:41.230 "data_size": 65536 00:13:41.230 }, 00:13:41.230 { 00:13:41.230 "name": "BaseBdev4", 00:13:41.230 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:41.230 "is_configured": true, 00:13:41.230 "data_offset": 0, 00:13:41.230 "data_size": 65536 00:13:41.230 } 00:13:41.230 ] 00:13:41.230 }' 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.230 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.489 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.489 [2024-11-20 07:10:23.750451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.747 07:10:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.747 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.748 "name": "Existed_Raid", 00:13:41.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.748 "strip_size_kb": 64, 00:13:41.748 "state": "configuring", 00:13:41.748 "raid_level": "raid0", 00:13:41.748 "superblock": false, 00:13:41.748 "num_base_bdevs": 4, 00:13:41.748 "num_base_bdevs_discovered": 2, 00:13:41.748 
"num_base_bdevs_operational": 4, 00:13:41.748 "base_bdevs_list": [ 00:13:41.748 { 00:13:41.748 "name": null, 00:13:41.748 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:41.748 "is_configured": false, 00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 65536 00:13:41.748 }, 00:13:41.748 { 00:13:41.748 "name": null, 00:13:41.748 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:41.748 "is_configured": false, 00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 65536 00:13:41.748 }, 00:13:41.748 { 00:13:41.748 "name": "BaseBdev3", 00:13:41.748 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:41.748 "is_configured": true, 00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 65536 00:13:41.748 }, 00:13:41.748 { 00:13:41.748 "name": "BaseBdev4", 00:13:41.748 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:41.748 "is_configured": true, 00:13:41.748 "data_offset": 0, 00:13:41.748 "data_size": 65536 00:13:41.748 } 00:13:41.748 ] 00:13:41.748 }' 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.748 07:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 [2024-11-20 07:10:24.356009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.313 
07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.313 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.313 "name": "Existed_Raid", 00:13:42.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.313 "strip_size_kb": 64, 00:13:42.313 "state": "configuring", 00:13:42.313 "raid_level": "raid0", 00:13:42.313 "superblock": false, 00:13:42.313 "num_base_bdevs": 4, 00:13:42.313 "num_base_bdevs_discovered": 3, 00:13:42.313 "num_base_bdevs_operational": 4, 00:13:42.313 "base_bdevs_list": [ 00:13:42.313 { 00:13:42.313 "name": null, 00:13:42.313 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:42.313 "is_configured": false, 00:13:42.313 "data_offset": 0, 00:13:42.313 "data_size": 65536 00:13:42.313 }, 00:13:42.313 { 00:13:42.313 "name": "BaseBdev2", 00:13:42.313 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:42.313 "is_configured": true, 00:13:42.313 "data_offset": 0, 00:13:42.313 "data_size": 65536 00:13:42.313 }, 00:13:42.313 { 00:13:42.313 "name": "BaseBdev3", 00:13:42.313 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:42.313 "is_configured": true, 00:13:42.313 "data_offset": 0, 00:13:42.313 "data_size": 65536 00:13:42.313 }, 00:13:42.313 { 00:13:42.313 "name": "BaseBdev4", 00:13:42.313 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:42.313 "is_configured": true, 00:13:42.313 "data_offset": 0, 00:13:42.313 "data_size": 65536 00:13:42.313 } 00:13:42.314 ] 00:13:42.314 }' 00:13:42.314 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.314 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.571 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.571 07:10:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:42.571 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.571 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.571 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d90f4fe1-32fe-44b4-ac3a-b52521dcabfb 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.850 [2024-11-20 07:10:24.933003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:42.850 [2024-11-20 07:10:24.933061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:42.850 [2024-11-20 07:10:24.933070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:42.850 [2024-11-20 07:10:24.933389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:13:42.850 [2024-11-20 07:10:24.933599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:42.850 [2024-11-20 07:10:24.933621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:42.850 [2024-11-20 07:10:24.933893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.850 NewBaseBdev 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.850 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:42.850 [ 00:13:42.850 { 00:13:42.850 "name": "NewBaseBdev", 00:13:42.850 "aliases": [ 00:13:42.850 "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb" 00:13:42.850 ], 00:13:42.850 "product_name": "Malloc disk", 00:13:42.850 "block_size": 512, 00:13:42.850 "num_blocks": 65536, 00:13:42.850 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:42.850 "assigned_rate_limits": { 00:13:42.850 "rw_ios_per_sec": 0, 00:13:42.850 "rw_mbytes_per_sec": 0, 00:13:42.850 "r_mbytes_per_sec": 0, 00:13:42.850 "w_mbytes_per_sec": 0 00:13:42.850 }, 00:13:42.850 "claimed": true, 00:13:42.850 "claim_type": "exclusive_write", 00:13:42.850 "zoned": false, 00:13:42.850 "supported_io_types": { 00:13:42.850 "read": true, 00:13:42.850 "write": true, 00:13:42.850 "unmap": true, 00:13:42.850 "flush": true, 00:13:42.850 "reset": true, 00:13:42.850 "nvme_admin": false, 00:13:42.850 "nvme_io": false, 00:13:42.850 "nvme_io_md": false, 00:13:42.850 "write_zeroes": true, 00:13:42.850 "zcopy": true, 00:13:42.850 "get_zone_info": false, 00:13:42.850 "zone_management": false, 00:13:42.850 "zone_append": false, 00:13:42.850 "compare": false, 00:13:42.850 "compare_and_write": false, 00:13:42.850 "abort": true, 00:13:42.850 "seek_hole": false, 00:13:42.851 "seek_data": false, 00:13:42.851 "copy": true, 00:13:42.851 "nvme_iov_md": false 00:13:42.851 }, 00:13:42.851 "memory_domains": [ 00:13:42.851 { 00:13:42.851 "dma_device_id": "system", 00:13:42.851 "dma_device_type": 1 00:13:42.851 }, 00:13:42.851 { 00:13:42.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.851 "dma_device_type": 2 00:13:42.851 } 00:13:42.851 ], 00:13:42.851 "driver_specific": {} 00:13:42.851 } 00:13:42.851 ] 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.851 07:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.851 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.851 "name": "Existed_Raid", 00:13:42.851 "uuid": "2dd035bd-5a30-4bf7-b594-ed0982b65e4c", 00:13:42.851 "strip_size_kb": 64, 00:13:42.851 "state": "online", 00:13:42.851 "raid_level": "raid0", 00:13:42.851 "superblock": false, 00:13:42.851 "num_base_bdevs": 4, 00:13:42.851 
"num_base_bdevs_discovered": 4, 00:13:42.851 "num_base_bdevs_operational": 4, 00:13:42.851 "base_bdevs_list": [ 00:13:42.851 { 00:13:42.851 "name": "NewBaseBdev", 00:13:42.851 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:42.851 "is_configured": true, 00:13:42.851 "data_offset": 0, 00:13:42.851 "data_size": 65536 00:13:42.851 }, 00:13:42.851 { 00:13:42.851 "name": "BaseBdev2", 00:13:42.851 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:42.851 "is_configured": true, 00:13:42.851 "data_offset": 0, 00:13:42.851 "data_size": 65536 00:13:42.851 }, 00:13:42.851 { 00:13:42.851 "name": "BaseBdev3", 00:13:42.851 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:42.851 "is_configured": true, 00:13:42.851 "data_offset": 0, 00:13:42.851 "data_size": 65536 00:13:42.851 }, 00:13:42.851 { 00:13:42.851 "name": "BaseBdev4", 00:13:42.851 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:42.851 "is_configured": true, 00:13:42.851 "data_offset": 0, 00:13:42.851 "data_size": 65536 00:13:42.851 } 00:13:42.851 ] 00:13:42.851 }' 00:13:42.851 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.851 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.417 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.417 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.418 [2024-11-20 07:10:25.468568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.418 "name": "Existed_Raid", 00:13:43.418 "aliases": [ 00:13:43.418 "2dd035bd-5a30-4bf7-b594-ed0982b65e4c" 00:13:43.418 ], 00:13:43.418 "product_name": "Raid Volume", 00:13:43.418 "block_size": 512, 00:13:43.418 "num_blocks": 262144, 00:13:43.418 "uuid": "2dd035bd-5a30-4bf7-b594-ed0982b65e4c", 00:13:43.418 "assigned_rate_limits": { 00:13:43.418 "rw_ios_per_sec": 0, 00:13:43.418 "rw_mbytes_per_sec": 0, 00:13:43.418 "r_mbytes_per_sec": 0, 00:13:43.418 "w_mbytes_per_sec": 0 00:13:43.418 }, 00:13:43.418 "claimed": false, 00:13:43.418 "zoned": false, 00:13:43.418 "supported_io_types": { 00:13:43.418 "read": true, 00:13:43.418 "write": true, 00:13:43.418 "unmap": true, 00:13:43.418 "flush": true, 00:13:43.418 "reset": true, 00:13:43.418 "nvme_admin": false, 00:13:43.418 "nvme_io": false, 00:13:43.418 "nvme_io_md": false, 00:13:43.418 "write_zeroes": true, 00:13:43.418 "zcopy": false, 00:13:43.418 "get_zone_info": false, 00:13:43.418 "zone_management": false, 00:13:43.418 "zone_append": false, 00:13:43.418 "compare": false, 00:13:43.418 "compare_and_write": false, 00:13:43.418 "abort": false, 00:13:43.418 "seek_hole": false, 00:13:43.418 "seek_data": false, 00:13:43.418 "copy": false, 00:13:43.418 "nvme_iov_md": false 00:13:43.418 }, 00:13:43.418 "memory_domains": [ 
00:13:43.418 { 00:13:43.418 "dma_device_id": "system", 00:13:43.418 "dma_device_type": 1 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.418 "dma_device_type": 2 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "system", 00:13:43.418 "dma_device_type": 1 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.418 "dma_device_type": 2 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "system", 00:13:43.418 "dma_device_type": 1 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.418 "dma_device_type": 2 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "system", 00:13:43.418 "dma_device_type": 1 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.418 "dma_device_type": 2 00:13:43.418 } 00:13:43.418 ], 00:13:43.418 "driver_specific": { 00:13:43.418 "raid": { 00:13:43.418 "uuid": "2dd035bd-5a30-4bf7-b594-ed0982b65e4c", 00:13:43.418 "strip_size_kb": 64, 00:13:43.418 "state": "online", 00:13:43.418 "raid_level": "raid0", 00:13:43.418 "superblock": false, 00:13:43.418 "num_base_bdevs": 4, 00:13:43.418 "num_base_bdevs_discovered": 4, 00:13:43.418 "num_base_bdevs_operational": 4, 00:13:43.418 "base_bdevs_list": [ 00:13:43.418 { 00:13:43.418 "name": "NewBaseBdev", 00:13:43.418 "uuid": "d90f4fe1-32fe-44b4-ac3a-b52521dcabfb", 00:13:43.418 "is_configured": true, 00:13:43.418 "data_offset": 0, 00:13:43.418 "data_size": 65536 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "name": "BaseBdev2", 00:13:43.418 "uuid": "a932164e-a5f4-4cf3-ab3c-110538528d4d", 00:13:43.418 "is_configured": true, 00:13:43.418 "data_offset": 0, 00:13:43.418 "data_size": 65536 00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "name": "BaseBdev3", 00:13:43.418 "uuid": "53c9911f-64fa-4e50-8333-2949d430ff1e", 00:13:43.418 "is_configured": true, 00:13:43.418 "data_offset": 0, 00:13:43.418 "data_size": 65536 
00:13:43.418 }, 00:13:43.418 { 00:13:43.418 "name": "BaseBdev4", 00:13:43.418 "uuid": "bc32ecc8-fd80-4288-9870-bfa525f52e6e", 00:13:43.418 "is_configured": true, 00:13:43.418 "data_offset": 0, 00:13:43.418 "data_size": 65536 00:13:43.418 } 00:13:43.418 ] 00:13:43.418 } 00:13:43.418 } 00:13:43.418 }' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:43.418 BaseBdev2 00:13:43.418 BaseBdev3 00:13:43.418 BaseBdev4' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.418 
07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.418 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.677 [2024-11-20 07:10:25.803580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:43.677 [2024-11-20 07:10:25.803614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.677 [2024-11-20 07:10:25.803702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.677 [2024-11-20 07:10:25.803768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.677 [2024-11-20 07:10:25.803780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69688 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69688 ']' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69688 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69688 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.677 killing process with pid 69688 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69688' 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69688 00:13:43.677 [2024-11-20 07:10:25.851556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.677 07:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69688 00:13:44.246 [2024-11-20 07:10:26.288195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:45.651 00:13:45.651 real 0m11.956s 00:13:45.651 user 0m18.969s 00:13:45.651 sys 0m2.100s 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.651 ************************************ 00:13:45.651 END TEST raid_state_function_test 00:13:45.651 ************************************ 00:13:45.651 07:10:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:13:45.651 07:10:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:45.651 07:10:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.651 07:10:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.651 ************************************ 00:13:45.651 START TEST raid_state_function_test_sb 00:13:45.651 ************************************ 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:45.651 
07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70370 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:45.651 Process raid pid: 70370 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70370' 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70370 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70370 ']' 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.651 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.651 [2024-11-20 07:10:27.634498] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:45.651 [2024-11-20 07:10:27.634635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.651 [2024-11-20 07:10:27.792133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.909 [2024-11-20 07:10:27.919122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.909 [2024-11-20 07:10:28.145798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.909 [2024-11-20 07:10:28.145852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.474 [2024-11-20 07:10:28.501249] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.474 [2024-11-20 07:10:28.501304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.474 [2024-11-20 07:10:28.501316] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.474 [2024-11-20 07:10:28.501327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.474 [2024-11-20 07:10:28.501345] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:13:46.474 [2024-11-20 07:10:28.501356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.474 [2024-11-20 07:10:28.501363] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:46.474 [2024-11-20 07:10:28.501373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.474 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.475 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.475 07:10:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.475 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.475 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.475 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.475 "name": "Existed_Raid", 00:13:46.475 "uuid": "dc1e0368-777e-4033-9c9c-64145e20b06a", 00:13:46.475 "strip_size_kb": 64, 00:13:46.475 "state": "configuring", 00:13:46.475 "raid_level": "raid0", 00:13:46.475 "superblock": true, 00:13:46.475 "num_base_bdevs": 4, 00:13:46.475 "num_base_bdevs_discovered": 0, 00:13:46.475 "num_base_bdevs_operational": 4, 00:13:46.475 "base_bdevs_list": [ 00:13:46.475 { 00:13:46.475 "name": "BaseBdev1", 00:13:46.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.475 "is_configured": false, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 0 00:13:46.475 }, 00:13:46.475 { 00:13:46.475 "name": "BaseBdev2", 00:13:46.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.475 "is_configured": false, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 0 00:13:46.475 }, 00:13:46.475 { 00:13:46.475 "name": "BaseBdev3", 00:13:46.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.475 "is_configured": false, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 0 00:13:46.475 }, 00:13:46.475 { 00:13:46.475 "name": "BaseBdev4", 00:13:46.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.475 "is_configured": false, 00:13:46.475 "data_offset": 0, 00:13:46.475 "data_size": 0 00:13:46.475 } 00:13:46.475 ] 00:13:46.475 }' 00:13:46.475 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.475 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.733 07:10:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.733 [2024-11-20 07:10:28.952542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.733 [2024-11-20 07:10:28.952621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.733 [2024-11-20 07:10:28.964516] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.733 [2024-11-20 07:10:28.964588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.733 [2024-11-20 07:10:28.964603] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.733 [2024-11-20 07:10:28.964618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.733 [2024-11-20 07:10:28.964629] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.733 [2024-11-20 07:10:28.964644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.733 [2024-11-20 07:10:28.964655] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:46.733 [2024-11-20 07:10:28.964670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.733 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.993 [2024-11-20 07:10:29.027217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.993 BaseBdev1 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.993 [ 00:13:46.993 { 00:13:46.993 "name": "BaseBdev1", 00:13:46.993 "aliases": [ 00:13:46.993 "1b658344-acb2-4525-b684-532a3ab6d160" 00:13:46.993 ], 00:13:46.993 "product_name": "Malloc disk", 00:13:46.993 "block_size": 512, 00:13:46.993 "num_blocks": 65536, 00:13:46.993 "uuid": "1b658344-acb2-4525-b684-532a3ab6d160", 00:13:46.993 "assigned_rate_limits": { 00:13:46.993 "rw_ios_per_sec": 0, 00:13:46.993 "rw_mbytes_per_sec": 0, 00:13:46.993 "r_mbytes_per_sec": 0, 00:13:46.993 "w_mbytes_per_sec": 0 00:13:46.993 }, 00:13:46.993 "claimed": true, 00:13:46.993 "claim_type": "exclusive_write", 00:13:46.993 "zoned": false, 00:13:46.993 "supported_io_types": { 00:13:46.993 "read": true, 00:13:46.993 "write": true, 00:13:46.993 "unmap": true, 00:13:46.993 "flush": true, 00:13:46.993 "reset": true, 00:13:46.993 "nvme_admin": false, 00:13:46.993 "nvme_io": false, 00:13:46.993 "nvme_io_md": false, 00:13:46.993 "write_zeroes": true, 00:13:46.993 "zcopy": true, 00:13:46.993 "get_zone_info": false, 00:13:46.993 "zone_management": false, 00:13:46.993 "zone_append": false, 00:13:46.993 "compare": false, 00:13:46.993 "compare_and_write": false, 00:13:46.993 "abort": true, 00:13:46.993 "seek_hole": false, 00:13:46.993 "seek_data": false, 00:13:46.993 "copy": true, 00:13:46.993 "nvme_iov_md": false 00:13:46.993 }, 00:13:46.993 "memory_domains": [ 00:13:46.993 { 00:13:46.993 "dma_device_id": "system", 00:13:46.993 "dma_device_type": 1 00:13:46.993 }, 00:13:46.993 { 00:13:46.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.993 "dma_device_type": 2 00:13:46.993 } 
00:13:46.993 ], 00:13:46.993 "driver_specific": {} 00:13:46.993 } 00:13:46.993 ] 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.993 07:10:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.993 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.993 "name": "Existed_Raid", 00:13:46.993 "uuid": "71d65a10-8201-4912-abee-1e2f8541eb60", 00:13:46.993 "strip_size_kb": 64, 00:13:46.993 "state": "configuring", 00:13:46.993 "raid_level": "raid0", 00:13:46.993 "superblock": true, 00:13:46.993 "num_base_bdevs": 4, 00:13:46.993 "num_base_bdevs_discovered": 1, 00:13:46.993 "num_base_bdevs_operational": 4, 00:13:46.993 "base_bdevs_list": [ 00:13:46.993 { 00:13:46.993 "name": "BaseBdev1", 00:13:46.993 "uuid": "1b658344-acb2-4525-b684-532a3ab6d160", 00:13:46.993 "is_configured": true, 00:13:46.993 "data_offset": 2048, 00:13:46.993 "data_size": 63488 00:13:46.993 }, 00:13:46.993 { 00:13:46.993 "name": "BaseBdev2", 00:13:46.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.993 "is_configured": false, 00:13:46.993 "data_offset": 0, 00:13:46.993 "data_size": 0 00:13:46.993 }, 00:13:46.993 { 00:13:46.993 "name": "BaseBdev3", 00:13:46.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.993 "is_configured": false, 00:13:46.993 "data_offset": 0, 00:13:46.993 "data_size": 0 00:13:46.993 }, 00:13:46.993 { 00:13:46.993 "name": "BaseBdev4", 00:13:46.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.993 "is_configured": false, 00:13:46.993 "data_offset": 0, 00:13:46.993 "data_size": 0 00:13:46.993 } 00:13:46.993 ] 00:13:46.994 }' 00:13:46.994 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.994 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.252 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:47.252 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.253 07:10:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.253 [2024-11-20 07:10:29.470578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.253 [2024-11-20 07:10:29.470683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.253 [2024-11-20 07:10:29.478639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.253 [2024-11-20 07:10:29.481090] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.253 [2024-11-20 07:10:29.481157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.253 [2024-11-20 07:10:29.481172] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:47.253 [2024-11-20 07:10:29.481187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.253 [2024-11-20 07:10:29.481197] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:47.253 [2024-11-20 07:10:29.481210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.253 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.511 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:47.511 "name": "Existed_Raid", 00:13:47.511 "uuid": "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee", 00:13:47.511 "strip_size_kb": 64, 00:13:47.511 "state": "configuring", 00:13:47.511 "raid_level": "raid0", 00:13:47.511 "superblock": true, 00:13:47.511 "num_base_bdevs": 4, 00:13:47.511 "num_base_bdevs_discovered": 1, 00:13:47.511 "num_base_bdevs_operational": 4, 00:13:47.511 "base_bdevs_list": [ 00:13:47.511 { 00:13:47.511 "name": "BaseBdev1", 00:13:47.511 "uuid": "1b658344-acb2-4525-b684-532a3ab6d160", 00:13:47.511 "is_configured": true, 00:13:47.511 "data_offset": 2048, 00:13:47.511 "data_size": 63488 00:13:47.511 }, 00:13:47.511 { 00:13:47.511 "name": "BaseBdev2", 00:13:47.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.511 "is_configured": false, 00:13:47.511 "data_offset": 0, 00:13:47.511 "data_size": 0 00:13:47.511 }, 00:13:47.511 { 00:13:47.511 "name": "BaseBdev3", 00:13:47.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.511 "is_configured": false, 00:13:47.511 "data_offset": 0, 00:13:47.511 "data_size": 0 00:13:47.511 }, 00:13:47.511 { 00:13:47.511 "name": "BaseBdev4", 00:13:47.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.511 "is_configured": false, 00:13:47.511 "data_offset": 0, 00:13:47.511 "data_size": 0 00:13:47.511 } 00:13:47.511 ] 00:13:47.511 }' 00:13:47.511 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.511 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 [2024-11-20 07:10:29.952011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:47.771 BaseBdev2 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 [ 00:13:47.771 { 00:13:47.771 "name": "BaseBdev2", 00:13:47.771 "aliases": [ 00:13:47.771 "fe76f285-2963-478f-9880-e517ee5e1738" 00:13:47.771 ], 00:13:47.771 "product_name": "Malloc disk", 00:13:47.771 "block_size": 512, 00:13:47.771 "num_blocks": 65536, 00:13:47.771 "uuid": "fe76f285-2963-478f-9880-e517ee5e1738", 
00:13:47.771 "assigned_rate_limits": { 00:13:47.771 "rw_ios_per_sec": 0, 00:13:47.771 "rw_mbytes_per_sec": 0, 00:13:47.771 "r_mbytes_per_sec": 0, 00:13:47.771 "w_mbytes_per_sec": 0 00:13:47.771 }, 00:13:47.771 "claimed": true, 00:13:47.771 "claim_type": "exclusive_write", 00:13:47.771 "zoned": false, 00:13:47.771 "supported_io_types": { 00:13:47.771 "read": true, 00:13:47.771 "write": true, 00:13:47.771 "unmap": true, 00:13:47.771 "flush": true, 00:13:47.771 "reset": true, 00:13:47.771 "nvme_admin": false, 00:13:47.771 "nvme_io": false, 00:13:47.771 "nvme_io_md": false, 00:13:47.771 "write_zeroes": true, 00:13:47.771 "zcopy": true, 00:13:47.771 "get_zone_info": false, 00:13:47.771 "zone_management": false, 00:13:47.771 "zone_append": false, 00:13:47.771 "compare": false, 00:13:47.771 "compare_and_write": false, 00:13:47.771 "abort": true, 00:13:47.771 "seek_hole": false, 00:13:47.771 "seek_data": false, 00:13:47.771 "copy": true, 00:13:47.771 "nvme_iov_md": false 00:13:47.771 }, 00:13:47.771 "memory_domains": [ 00:13:47.771 { 00:13:47.771 "dma_device_id": "system", 00:13:47.771 "dma_device_type": 1 00:13:47.771 }, 00:13:47.771 { 00:13:47.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.771 "dma_device_type": 2 00:13:47.771 } 00:13:47.771 ], 00:13:47.771 "driver_specific": {} 00:13:47.771 } 00:13:47.771 ] 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.771 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.772 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.772 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.772 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.772 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.030 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.030 "name": "Existed_Raid", 00:13:48.030 "uuid": "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee", 00:13:48.031 "strip_size_kb": 64, 00:13:48.031 "state": "configuring", 00:13:48.031 "raid_level": "raid0", 00:13:48.031 "superblock": true, 00:13:48.031 "num_base_bdevs": 4, 00:13:48.031 "num_base_bdevs_discovered": 2, 00:13:48.031 
"num_base_bdevs_operational": 4, 00:13:48.031 "base_bdevs_list": [ 00:13:48.031 { 00:13:48.031 "name": "BaseBdev1", 00:13:48.031 "uuid": "1b658344-acb2-4525-b684-532a3ab6d160", 00:13:48.031 "is_configured": true, 00:13:48.031 "data_offset": 2048, 00:13:48.031 "data_size": 63488 00:13:48.031 }, 00:13:48.031 { 00:13:48.031 "name": "BaseBdev2", 00:13:48.031 "uuid": "fe76f285-2963-478f-9880-e517ee5e1738", 00:13:48.031 "is_configured": true, 00:13:48.031 "data_offset": 2048, 00:13:48.031 "data_size": 63488 00:13:48.031 }, 00:13:48.031 { 00:13:48.031 "name": "BaseBdev3", 00:13:48.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.031 "is_configured": false, 00:13:48.031 "data_offset": 0, 00:13:48.031 "data_size": 0 00:13:48.031 }, 00:13:48.031 { 00:13:48.031 "name": "BaseBdev4", 00:13:48.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.031 "is_configured": false, 00:13:48.031 "data_offset": 0, 00:13:48.031 "data_size": 0 00:13:48.031 } 00:13:48.031 ] 00:13:48.031 }' 00:13:48.031 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.031 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.291 [2024-11-20 07:10:30.484585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.291 BaseBdev3 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.291 [ 00:13:48.291 { 00:13:48.291 "name": "BaseBdev3", 00:13:48.291 "aliases": [ 00:13:48.291 "dd17712f-dd60-4941-a28a-c485e40f93a3" 00:13:48.291 ], 00:13:48.291 "product_name": "Malloc disk", 00:13:48.291 "block_size": 512, 00:13:48.291 "num_blocks": 65536, 00:13:48.291 "uuid": "dd17712f-dd60-4941-a28a-c485e40f93a3", 00:13:48.291 "assigned_rate_limits": { 00:13:48.291 "rw_ios_per_sec": 0, 00:13:48.291 "rw_mbytes_per_sec": 0, 00:13:48.291 "r_mbytes_per_sec": 0, 00:13:48.291 "w_mbytes_per_sec": 0 00:13:48.291 }, 00:13:48.291 "claimed": true, 00:13:48.291 "claim_type": "exclusive_write", 00:13:48.291 "zoned": false, 00:13:48.291 "supported_io_types": { 
00:13:48.291 "read": true, 00:13:48.291 "write": true, 00:13:48.291 "unmap": true, 00:13:48.291 "flush": true, 00:13:48.291 "reset": true, 00:13:48.291 "nvme_admin": false, 00:13:48.291 "nvme_io": false, 00:13:48.291 "nvme_io_md": false, 00:13:48.291 "write_zeroes": true, 00:13:48.291 "zcopy": true, 00:13:48.291 "get_zone_info": false, 00:13:48.291 "zone_management": false, 00:13:48.291 "zone_append": false, 00:13:48.291 "compare": false, 00:13:48.291 "compare_and_write": false, 00:13:48.291 "abort": true, 00:13:48.291 "seek_hole": false, 00:13:48.291 "seek_data": false, 00:13:48.291 "copy": true, 00:13:48.291 "nvme_iov_md": false 00:13:48.291 }, 00:13:48.291 "memory_domains": [ 00:13:48.291 { 00:13:48.291 "dma_device_id": "system", 00:13:48.291 "dma_device_type": 1 00:13:48.291 }, 00:13:48.291 { 00:13:48.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.291 "dma_device_type": 2 00:13:48.291 } 00:13:48.291 ], 00:13:48.291 "driver_specific": {} 00:13:48.291 } 00:13:48.291 ] 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.291 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.551 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.551 "name": "Existed_Raid", 00:13:48.551 "uuid": "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee", 00:13:48.551 "strip_size_kb": 64, 00:13:48.551 "state": "configuring", 00:13:48.551 "raid_level": "raid0", 00:13:48.551 "superblock": true, 00:13:48.551 "num_base_bdevs": 4, 00:13:48.551 "num_base_bdevs_discovered": 3, 00:13:48.551 "num_base_bdevs_operational": 4, 00:13:48.551 "base_bdevs_list": [ 00:13:48.551 { 00:13:48.551 "name": "BaseBdev1", 00:13:48.551 "uuid": "1b658344-acb2-4525-b684-532a3ab6d160", 00:13:48.551 "is_configured": true, 00:13:48.551 "data_offset": 2048, 00:13:48.551 "data_size": 63488 00:13:48.551 }, 00:13:48.551 { 00:13:48.551 "name": "BaseBdev2", 00:13:48.551 
"uuid": "fe76f285-2963-478f-9880-e517ee5e1738", 00:13:48.551 "is_configured": true, 00:13:48.551 "data_offset": 2048, 00:13:48.551 "data_size": 63488 00:13:48.551 }, 00:13:48.551 { 00:13:48.551 "name": "BaseBdev3", 00:13:48.551 "uuid": "dd17712f-dd60-4941-a28a-c485e40f93a3", 00:13:48.551 "is_configured": true, 00:13:48.551 "data_offset": 2048, 00:13:48.551 "data_size": 63488 00:13:48.551 }, 00:13:48.551 { 00:13:48.551 "name": "BaseBdev4", 00:13:48.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.551 "is_configured": false, 00:13:48.551 "data_offset": 0, 00:13:48.551 "data_size": 0 00:13:48.551 } 00:13:48.551 ] 00:13:48.551 }' 00:13:48.551 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.551 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.811 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:48.811 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.811 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.811 [2024-11-20 07:10:31.039752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.811 [2024-11-20 07:10:31.040054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:48.811 [2024-11-20 07:10:31.040070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:48.811 [2024-11-20 07:10:31.040393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:48.811 [2024-11-20 07:10:31.040577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:48.811 [2024-11-20 07:10:31.040599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:13:48.811 [2024-11-20 07:10:31.040776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.811 BaseBdev4 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.811 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.811 [ 00:13:48.811 { 00:13:48.811 "name": "BaseBdev4", 00:13:48.811 "aliases": [ 00:13:48.811 "6e282703-7358-405d-912a-9a7d42c07d68" 00:13:48.811 ], 00:13:48.811 "product_name": "Malloc disk", 00:13:48.811 "block_size": 512, 
00:13:48.811 "num_blocks": 65536, 00:13:48.811 "uuid": "6e282703-7358-405d-912a-9a7d42c07d68", 00:13:48.811 "assigned_rate_limits": { 00:13:48.811 "rw_ios_per_sec": 0, 00:13:48.811 "rw_mbytes_per_sec": 0, 00:13:48.811 "r_mbytes_per_sec": 0, 00:13:48.811 "w_mbytes_per_sec": 0 00:13:48.811 }, 00:13:48.811 "claimed": true, 00:13:48.811 "claim_type": "exclusive_write", 00:13:48.811 "zoned": false, 00:13:48.811 "supported_io_types": { 00:13:48.811 "read": true, 00:13:48.811 "write": true, 00:13:48.811 "unmap": true, 00:13:48.811 "flush": true, 00:13:48.811 "reset": true, 00:13:48.811 "nvme_admin": false, 00:13:48.811 "nvme_io": false, 00:13:48.811 "nvme_io_md": false, 00:13:48.811 "write_zeroes": true, 00:13:48.811 "zcopy": true, 00:13:48.811 "get_zone_info": false, 00:13:48.811 "zone_management": false, 00:13:48.811 "zone_append": false, 00:13:48.811 "compare": false, 00:13:48.811 "compare_and_write": false, 00:13:48.811 "abort": true, 00:13:48.811 "seek_hole": false, 00:13:48.811 "seek_data": false, 00:13:48.811 "copy": true, 00:13:48.811 "nvme_iov_md": false 00:13:48.811 }, 00:13:48.811 "memory_domains": [ 00:13:48.811 { 00:13:48.811 "dma_device_id": "system", 00:13:48.811 "dma_device_type": 1 00:13:48.811 }, 00:13:48.811 { 00:13:48.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.811 "dma_device_type": 2 00:13:48.811 } 00:13:48.811 ], 00:13:48.811 "driver_specific": {} 00:13:49.070 } 00:13:49.070 ] 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 
64 4 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.070 "name": "Existed_Raid", 00:13:49.070 "uuid": "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee", 00:13:49.070 "strip_size_kb": 64, 00:13:49.070 "state": "online", 00:13:49.070 "raid_level": "raid0", 00:13:49.070 "superblock": true, 00:13:49.070 "num_base_bdevs": 4, 
00:13:49.070 "num_base_bdevs_discovered": 4, 00:13:49.070 "num_base_bdevs_operational": 4, 00:13:49.070 "base_bdevs_list": [ 00:13:49.070 { 00:13:49.070 "name": "BaseBdev1", 00:13:49.070 "uuid": "1b658344-acb2-4525-b684-532a3ab6d160", 00:13:49.070 "is_configured": true, 00:13:49.070 "data_offset": 2048, 00:13:49.070 "data_size": 63488 00:13:49.070 }, 00:13:49.070 { 00:13:49.070 "name": "BaseBdev2", 00:13:49.070 "uuid": "fe76f285-2963-478f-9880-e517ee5e1738", 00:13:49.070 "is_configured": true, 00:13:49.070 "data_offset": 2048, 00:13:49.070 "data_size": 63488 00:13:49.070 }, 00:13:49.070 { 00:13:49.070 "name": "BaseBdev3", 00:13:49.070 "uuid": "dd17712f-dd60-4941-a28a-c485e40f93a3", 00:13:49.070 "is_configured": true, 00:13:49.070 "data_offset": 2048, 00:13:49.070 "data_size": 63488 00:13:49.070 }, 00:13:49.070 { 00:13:49.070 "name": "BaseBdev4", 00:13:49.070 "uuid": "6e282703-7358-405d-912a-9a7d42c07d68", 00:13:49.070 "is_configured": true, 00:13:49.070 "data_offset": 2048, 00:13:49.070 "data_size": 63488 00:13:49.070 } 00:13:49.070 ] 00:13:49.070 }' 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.070 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.328 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.328 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.328 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.328 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.328 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.329 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:49.329 
07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.329 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.329 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.329 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.329 [2024-11-20 07:10:31.555375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.329 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.329 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.329 "name": "Existed_Raid", 00:13:49.329 "aliases": [ 00:13:49.329 "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee" 00:13:49.329 ], 00:13:49.329 "product_name": "Raid Volume", 00:13:49.329 "block_size": 512, 00:13:49.329 "num_blocks": 253952, 00:13:49.329 "uuid": "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee", 00:13:49.329 "assigned_rate_limits": { 00:13:49.329 "rw_ios_per_sec": 0, 00:13:49.329 "rw_mbytes_per_sec": 0, 00:13:49.329 "r_mbytes_per_sec": 0, 00:13:49.329 "w_mbytes_per_sec": 0 00:13:49.329 }, 00:13:49.329 "claimed": false, 00:13:49.329 "zoned": false, 00:13:49.329 "supported_io_types": { 00:13:49.329 "read": true, 00:13:49.329 "write": true, 00:13:49.329 "unmap": true, 00:13:49.329 "flush": true, 00:13:49.329 "reset": true, 00:13:49.329 "nvme_admin": false, 00:13:49.329 "nvme_io": false, 00:13:49.329 "nvme_io_md": false, 00:13:49.329 "write_zeroes": true, 00:13:49.329 "zcopy": false, 00:13:49.329 "get_zone_info": false, 00:13:49.329 "zone_management": false, 00:13:49.329 "zone_append": false, 00:13:49.329 "compare": false, 00:13:49.329 "compare_and_write": false, 00:13:49.329 "abort": false, 00:13:49.329 "seek_hole": false, 00:13:49.329 "seek_data": false, 00:13:49.329 "copy": false, 00:13:49.329 
"nvme_iov_md": false 00:13:49.329 }, 00:13:49.329 "memory_domains": [ 00:13:49.329 { 00:13:49.329 "dma_device_id": "system", 00:13:49.329 "dma_device_type": 1 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.329 "dma_device_type": 2 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "dma_device_id": "system", 00:13:49.329 "dma_device_type": 1 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.329 "dma_device_type": 2 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "dma_device_id": "system", 00:13:49.329 "dma_device_type": 1 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.329 "dma_device_type": 2 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "dma_device_id": "system", 00:13:49.329 "dma_device_type": 1 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.329 "dma_device_type": 2 00:13:49.329 } 00:13:49.329 ], 00:13:49.329 "driver_specific": { 00:13:49.329 "raid": { 00:13:49.329 "uuid": "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee", 00:13:49.329 "strip_size_kb": 64, 00:13:49.329 "state": "online", 00:13:49.329 "raid_level": "raid0", 00:13:49.329 "superblock": true, 00:13:49.329 "num_base_bdevs": 4, 00:13:49.329 "num_base_bdevs_discovered": 4, 00:13:49.329 "num_base_bdevs_operational": 4, 00:13:49.329 "base_bdevs_list": [ 00:13:49.329 { 00:13:49.329 "name": "BaseBdev1", 00:13:49.329 "uuid": "1b658344-acb2-4525-b684-532a3ab6d160", 00:13:49.329 "is_configured": true, 00:13:49.329 "data_offset": 2048, 00:13:49.329 "data_size": 63488 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "name": "BaseBdev2", 00:13:49.329 "uuid": "fe76f285-2963-478f-9880-e517ee5e1738", 00:13:49.329 "is_configured": true, 00:13:49.329 "data_offset": 2048, 00:13:49.329 "data_size": 63488 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "name": "BaseBdev3", 00:13:49.329 "uuid": "dd17712f-dd60-4941-a28a-c485e40f93a3", 00:13:49.329 "is_configured": true, 
00:13:49.329 "data_offset": 2048, 00:13:49.329 "data_size": 63488 00:13:49.329 }, 00:13:49.329 { 00:13:49.329 "name": "BaseBdev4", 00:13:49.329 "uuid": "6e282703-7358-405d-912a-9a7d42c07d68", 00:13:49.329 "is_configured": true, 00:13:49.329 "data_offset": 2048, 00:13:49.329 "data_size": 63488 00:13:49.329 } 00:13:49.329 ] 00:13:49.329 } 00:13:49.329 } 00:13:49.329 }' 00:13:49.329 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:49.588 BaseBdev2 00:13:49.588 BaseBdev3 00:13:49.588 BaseBdev4' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.588 07:10:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.588 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 [2024-11-20 07:10:31.894520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.847 [2024-11-20 07:10:31.894555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.847 [2024-11-20 07:10:31.894620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.847 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:49.847 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.847 "name": "Existed_Raid", 00:13:49.847 "uuid": "2ee3b52f-3a07-48a9-b3b8-9b7b626a87ee", 00:13:49.847 "strip_size_kb": 64, 00:13:49.847 "state": "offline", 00:13:49.847 "raid_level": "raid0", 00:13:49.848 "superblock": true, 00:13:49.848 "num_base_bdevs": 4, 00:13:49.848 "num_base_bdevs_discovered": 3, 00:13:49.848 "num_base_bdevs_operational": 3, 00:13:49.848 "base_bdevs_list": [ 00:13:49.848 { 00:13:49.848 "name": null, 00:13:49.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.848 "is_configured": false, 00:13:49.848 "data_offset": 0, 00:13:49.848 "data_size": 63488 00:13:49.848 }, 00:13:49.848 { 00:13:49.848 "name": "BaseBdev2", 00:13:49.848 "uuid": "fe76f285-2963-478f-9880-e517ee5e1738", 00:13:49.848 "is_configured": true, 00:13:49.848 "data_offset": 2048, 00:13:49.848 "data_size": 63488 00:13:49.848 }, 00:13:49.848 { 00:13:49.848 "name": "BaseBdev3", 00:13:49.848 "uuid": "dd17712f-dd60-4941-a28a-c485e40f93a3", 00:13:49.848 "is_configured": true, 00:13:49.848 "data_offset": 2048, 00:13:49.848 "data_size": 63488 00:13:49.848 }, 00:13:49.848 { 00:13:49.848 "name": "BaseBdev4", 00:13:49.848 "uuid": "6e282703-7358-405d-912a-9a7d42c07d68", 00:13:49.848 "is_configured": true, 00:13:49.848 "data_offset": 2048, 00:13:49.848 "data_size": 63488 00:13:49.848 } 00:13:49.848 ] 00:13:49.848 }' 00:13:49.848 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.848 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.415 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:50.415 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.416 
07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.416 [2024-11-20 07:10:32.524087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:50.416 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.675 [2024-11-20 07:10:32.692699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:50.675 07:10:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.675 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.675 [2024-11-20 07:10:32.846375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:50.675 [2024-11-20 07:10:32.846441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:50.934 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:50.934 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 BaseBdev2 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 [ 00:13:50.935 { 00:13:50.935 "name": "BaseBdev2", 00:13:50.935 "aliases": [ 00:13:50.935 
"2777025b-2a68-4b61-9c14-73ee94727cbf" 00:13:50.935 ], 00:13:50.935 "product_name": "Malloc disk", 00:13:50.935 "block_size": 512, 00:13:50.935 "num_blocks": 65536, 00:13:50.935 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:50.935 "assigned_rate_limits": { 00:13:50.935 "rw_ios_per_sec": 0, 00:13:50.935 "rw_mbytes_per_sec": 0, 00:13:50.935 "r_mbytes_per_sec": 0, 00:13:50.935 "w_mbytes_per_sec": 0 00:13:50.935 }, 00:13:50.935 "claimed": false, 00:13:50.935 "zoned": false, 00:13:50.935 "supported_io_types": { 00:13:50.935 "read": true, 00:13:50.935 "write": true, 00:13:50.935 "unmap": true, 00:13:50.935 "flush": true, 00:13:50.935 "reset": true, 00:13:50.935 "nvme_admin": false, 00:13:50.935 "nvme_io": false, 00:13:50.935 "nvme_io_md": false, 00:13:50.935 "write_zeroes": true, 00:13:50.935 "zcopy": true, 00:13:50.935 "get_zone_info": false, 00:13:50.935 "zone_management": false, 00:13:50.935 "zone_append": false, 00:13:50.935 "compare": false, 00:13:50.935 "compare_and_write": false, 00:13:50.935 "abort": true, 00:13:50.935 "seek_hole": false, 00:13:50.935 "seek_data": false, 00:13:50.935 "copy": true, 00:13:50.935 "nvme_iov_md": false 00:13:50.935 }, 00:13:50.935 "memory_domains": [ 00:13:50.935 { 00:13:50.935 "dma_device_id": "system", 00:13:50.935 "dma_device_type": 1 00:13:50.935 }, 00:13:50.935 { 00:13:50.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.935 "dma_device_type": 2 00:13:50.935 } 00:13:50.935 ], 00:13:50.935 "driver_specific": {} 00:13:50.935 } 00:13:50.935 ] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.935 07:10:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 BaseBdev3 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 [ 00:13:50.935 { 
00:13:50.935 "name": "BaseBdev3", 00:13:50.935 "aliases": [ 00:13:50.935 "6256d322-b3f7-4c50-b7eb-c3663f9adfc8" 00:13:50.935 ], 00:13:50.935 "product_name": "Malloc disk", 00:13:50.935 "block_size": 512, 00:13:50.935 "num_blocks": 65536, 00:13:50.935 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:50.935 "assigned_rate_limits": { 00:13:50.935 "rw_ios_per_sec": 0, 00:13:50.935 "rw_mbytes_per_sec": 0, 00:13:50.935 "r_mbytes_per_sec": 0, 00:13:50.935 "w_mbytes_per_sec": 0 00:13:50.935 }, 00:13:50.935 "claimed": false, 00:13:50.935 "zoned": false, 00:13:50.935 "supported_io_types": { 00:13:50.935 "read": true, 00:13:50.935 "write": true, 00:13:50.935 "unmap": true, 00:13:50.935 "flush": true, 00:13:50.935 "reset": true, 00:13:50.935 "nvme_admin": false, 00:13:50.935 "nvme_io": false, 00:13:50.935 "nvme_io_md": false, 00:13:50.935 "write_zeroes": true, 00:13:50.935 "zcopy": true, 00:13:50.935 "get_zone_info": false, 00:13:50.935 "zone_management": false, 00:13:50.935 "zone_append": false, 00:13:50.935 "compare": false, 00:13:50.935 "compare_and_write": false, 00:13:50.935 "abort": true, 00:13:50.935 "seek_hole": false, 00:13:50.935 "seek_data": false, 00:13:50.935 "copy": true, 00:13:50.935 "nvme_iov_md": false 00:13:50.935 }, 00:13:50.935 "memory_domains": [ 00:13:50.935 { 00:13:50.935 "dma_device_id": "system", 00:13:50.935 "dma_device_type": 1 00:13:50.935 }, 00:13:50.935 { 00:13:50.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.935 "dma_device_type": 2 00:13:50.935 } 00:13:50.935 ], 00:13:50.935 "driver_specific": {} 00:13:50.935 } 00:13:50.935 ] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.195 BaseBdev4 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.195 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:51.195 [ 00:13:51.195 { 00:13:51.195 "name": "BaseBdev4", 00:13:51.195 "aliases": [ 00:13:51.195 "2fe8ba97-087c-4b9e-89c4-a626efab8daa" 00:13:51.195 ], 00:13:51.195 "product_name": "Malloc disk", 00:13:51.195 "block_size": 512, 00:13:51.195 "num_blocks": 65536, 00:13:51.195 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:51.195 "assigned_rate_limits": { 00:13:51.195 "rw_ios_per_sec": 0, 00:13:51.195 "rw_mbytes_per_sec": 0, 00:13:51.195 "r_mbytes_per_sec": 0, 00:13:51.195 "w_mbytes_per_sec": 0 00:13:51.195 }, 00:13:51.195 "claimed": false, 00:13:51.195 "zoned": false, 00:13:51.195 "supported_io_types": { 00:13:51.195 "read": true, 00:13:51.195 "write": true, 00:13:51.195 "unmap": true, 00:13:51.195 "flush": true, 00:13:51.195 "reset": true, 00:13:51.195 "nvme_admin": false, 00:13:51.195 "nvme_io": false, 00:13:51.195 "nvme_io_md": false, 00:13:51.195 "write_zeroes": true, 00:13:51.195 "zcopy": true, 00:13:51.195 "get_zone_info": false, 00:13:51.195 "zone_management": false, 00:13:51.195 "zone_append": false, 00:13:51.195 "compare": false, 00:13:51.195 "compare_and_write": false, 00:13:51.195 "abort": true, 00:13:51.195 "seek_hole": false, 00:13:51.195 "seek_data": false, 00:13:51.195 "copy": true, 00:13:51.195 "nvme_iov_md": false 00:13:51.195 }, 00:13:51.195 "memory_domains": [ 00:13:51.195 { 00:13:51.195 "dma_device_id": "system", 00:13:51.195 "dma_device_type": 1 00:13:51.195 }, 00:13:51.195 { 00:13:51.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.195 "dma_device_type": 2 00:13:51.195 } 00:13:51.195 ], 00:13:51.196 "driver_specific": {} 00:13:51.196 } 00:13:51.196 ] 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:51.196 07:10:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.196 [2024-11-20 07:10:33.240837] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.196 [2024-11-20 07:10:33.240964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.196 [2024-11-20 07:10:33.241034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.196 [2024-11-20 07:10:33.243545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.196 [2024-11-20 07:10:33.243681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.196 "name": "Existed_Raid", 00:13:51.196 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:51.196 "strip_size_kb": 64, 00:13:51.196 "state": "configuring", 00:13:51.196 "raid_level": "raid0", 00:13:51.196 "superblock": true, 00:13:51.196 "num_base_bdevs": 4, 00:13:51.196 "num_base_bdevs_discovered": 3, 00:13:51.196 "num_base_bdevs_operational": 4, 00:13:51.196 "base_bdevs_list": [ 00:13:51.196 { 00:13:51.196 "name": "BaseBdev1", 00:13:51.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.196 "is_configured": false, 00:13:51.196 "data_offset": 0, 00:13:51.196 "data_size": 0 00:13:51.196 }, 00:13:51.196 { 00:13:51.196 "name": "BaseBdev2", 00:13:51.196 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:51.196 "is_configured": true, 00:13:51.196 "data_offset": 2048, 00:13:51.196 "data_size": 63488 
00:13:51.196 }, 00:13:51.196 { 00:13:51.196 "name": "BaseBdev3", 00:13:51.196 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:51.196 "is_configured": true, 00:13:51.196 "data_offset": 2048, 00:13:51.196 "data_size": 63488 00:13:51.196 }, 00:13:51.196 { 00:13:51.196 "name": "BaseBdev4", 00:13:51.196 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:51.196 "is_configured": true, 00:13:51.196 "data_offset": 2048, 00:13:51.196 "data_size": 63488 00:13:51.196 } 00:13:51.196 ] 00:13:51.196 }' 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.196 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.455 [2024-11-20 07:10:33.684059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.455 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.714 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.714 "name": "Existed_Raid", 00:13:51.714 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:51.714 "strip_size_kb": 64, 00:13:51.714 "state": "configuring", 00:13:51.714 "raid_level": "raid0", 00:13:51.714 "superblock": true, 00:13:51.714 "num_base_bdevs": 4, 00:13:51.714 "num_base_bdevs_discovered": 2, 00:13:51.714 "num_base_bdevs_operational": 4, 00:13:51.714 "base_bdevs_list": [ 00:13:51.714 { 00:13:51.714 "name": "BaseBdev1", 00:13:51.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.714 "is_configured": false, 00:13:51.714 "data_offset": 0, 00:13:51.714 "data_size": 0 00:13:51.714 }, 00:13:51.714 { 00:13:51.714 "name": null, 00:13:51.714 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:51.714 "is_configured": false, 00:13:51.714 "data_offset": 0, 00:13:51.714 "data_size": 63488 
00:13:51.714 }, 00:13:51.714 { 00:13:51.714 "name": "BaseBdev3", 00:13:51.714 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:51.714 "is_configured": true, 00:13:51.714 "data_offset": 2048, 00:13:51.714 "data_size": 63488 00:13:51.714 }, 00:13:51.714 { 00:13:51.714 "name": "BaseBdev4", 00:13:51.714 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:51.714 "is_configured": true, 00:13:51.714 "data_offset": 2048, 00:13:51.714 "data_size": 63488 00:13:51.714 } 00:13:51.714 ] 00:13:51.714 }' 00:13:51.714 07:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.714 07:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.981 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.981 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:51.981 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.981 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.981 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.981 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:51.982 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:51.982 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.982 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.274 [2024-11-20 07:10:34.254477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.274 BaseBdev1 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.274 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.274 [ 00:13:52.274 { 00:13:52.274 "name": "BaseBdev1", 00:13:52.274 "aliases": [ 00:13:52.274 "e6bb4462-074a-4913-b98b-0ba7a7cbdb82" 00:13:52.274 ], 00:13:52.274 "product_name": "Malloc disk", 00:13:52.274 "block_size": 512, 00:13:52.274 "num_blocks": 65536, 00:13:52.274 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:52.274 "assigned_rate_limits": { 00:13:52.274 "rw_ios_per_sec": 0, 00:13:52.274 "rw_mbytes_per_sec": 0, 
00:13:52.274 "r_mbytes_per_sec": 0, 00:13:52.274 "w_mbytes_per_sec": 0 00:13:52.274 }, 00:13:52.274 "claimed": true, 00:13:52.274 "claim_type": "exclusive_write", 00:13:52.274 "zoned": false, 00:13:52.275 "supported_io_types": { 00:13:52.275 "read": true, 00:13:52.275 "write": true, 00:13:52.275 "unmap": true, 00:13:52.275 "flush": true, 00:13:52.275 "reset": true, 00:13:52.275 "nvme_admin": false, 00:13:52.275 "nvme_io": false, 00:13:52.275 "nvme_io_md": false, 00:13:52.275 "write_zeroes": true, 00:13:52.275 "zcopy": true, 00:13:52.275 "get_zone_info": false, 00:13:52.275 "zone_management": false, 00:13:52.275 "zone_append": false, 00:13:52.275 "compare": false, 00:13:52.275 "compare_and_write": false, 00:13:52.275 "abort": true, 00:13:52.275 "seek_hole": false, 00:13:52.275 "seek_data": false, 00:13:52.275 "copy": true, 00:13:52.275 "nvme_iov_md": false 00:13:52.275 }, 00:13:52.275 "memory_domains": [ 00:13:52.275 { 00:13:52.275 "dma_device_id": "system", 00:13:52.275 "dma_device_type": 1 00:13:52.275 }, 00:13:52.275 { 00:13:52.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.275 "dma_device_type": 2 00:13:52.275 } 00:13:52.275 ], 00:13:52.275 "driver_specific": {} 00:13:52.275 } 00:13:52.275 ] 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.275 07:10:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.275 "name": "Existed_Raid", 00:13:52.275 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:52.275 "strip_size_kb": 64, 00:13:52.275 "state": "configuring", 00:13:52.275 "raid_level": "raid0", 00:13:52.275 "superblock": true, 00:13:52.275 "num_base_bdevs": 4, 00:13:52.275 "num_base_bdevs_discovered": 3, 00:13:52.275 "num_base_bdevs_operational": 4, 00:13:52.275 "base_bdevs_list": [ 00:13:52.275 { 00:13:52.275 "name": "BaseBdev1", 00:13:52.275 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:52.275 "is_configured": true, 00:13:52.275 "data_offset": 2048, 00:13:52.275 "data_size": 63488 00:13:52.275 }, 00:13:52.275 { 
00:13:52.275 "name": null, 00:13:52.275 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:52.275 "is_configured": false, 00:13:52.275 "data_offset": 0, 00:13:52.275 "data_size": 63488 00:13:52.275 }, 00:13:52.275 { 00:13:52.275 "name": "BaseBdev3", 00:13:52.275 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:52.275 "is_configured": true, 00:13:52.275 "data_offset": 2048, 00:13:52.275 "data_size": 63488 00:13:52.275 }, 00:13:52.275 { 00:13:52.275 "name": "BaseBdev4", 00:13:52.275 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:52.275 "is_configured": true, 00:13:52.275 "data_offset": 2048, 00:13:52.275 "data_size": 63488 00:13:52.275 } 00:13:52.275 ] 00:13:52.275 }' 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.275 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.534 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.534 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:52.534 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.534 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.534 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.793 [2024-11-20 07:10:34.809639] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.793 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.793 07:10:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.793 "name": "Existed_Raid", 00:13:52.793 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:52.793 "strip_size_kb": 64, 00:13:52.793 "state": "configuring", 00:13:52.793 "raid_level": "raid0", 00:13:52.793 "superblock": true, 00:13:52.793 "num_base_bdevs": 4, 00:13:52.793 "num_base_bdevs_discovered": 2, 00:13:52.793 "num_base_bdevs_operational": 4, 00:13:52.793 "base_bdevs_list": [ 00:13:52.793 { 00:13:52.793 "name": "BaseBdev1", 00:13:52.793 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:52.793 "is_configured": true, 00:13:52.793 "data_offset": 2048, 00:13:52.793 "data_size": 63488 00:13:52.793 }, 00:13:52.793 { 00:13:52.793 "name": null, 00:13:52.793 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:52.793 "is_configured": false, 00:13:52.793 "data_offset": 0, 00:13:52.793 "data_size": 63488 00:13:52.793 }, 00:13:52.793 { 00:13:52.793 "name": null, 00:13:52.794 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:52.794 "is_configured": false, 00:13:52.794 "data_offset": 0, 00:13:52.794 "data_size": 63488 00:13:52.794 }, 00:13:52.794 { 00:13:52.794 "name": "BaseBdev4", 00:13:52.794 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:52.794 "is_configured": true, 00:13:52.794 "data_offset": 2048, 00:13:52.794 "data_size": 63488 00:13:52.794 } 00:13:52.794 ] 00:13:52.794 }' 00:13:52.794 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.794 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.052 
07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.052 [2024-11-20 07:10:35.308822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.052 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.312 "name": "Existed_Raid", 00:13:53.312 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:53.312 "strip_size_kb": 64, 00:13:53.312 "state": "configuring", 00:13:53.312 "raid_level": "raid0", 00:13:53.312 "superblock": true, 00:13:53.312 "num_base_bdevs": 4, 00:13:53.312 "num_base_bdevs_discovered": 3, 00:13:53.312 "num_base_bdevs_operational": 4, 00:13:53.312 "base_bdevs_list": [ 00:13:53.312 { 00:13:53.312 "name": "BaseBdev1", 00:13:53.312 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:53.312 "is_configured": true, 00:13:53.312 "data_offset": 2048, 00:13:53.312 "data_size": 63488 00:13:53.312 }, 00:13:53.312 { 00:13:53.312 "name": null, 00:13:53.312 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:53.312 "is_configured": false, 00:13:53.312 "data_offset": 0, 00:13:53.312 "data_size": 63488 00:13:53.312 }, 00:13:53.312 { 00:13:53.312 "name": "BaseBdev3", 00:13:53.312 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:53.312 "is_configured": true, 00:13:53.312 "data_offset": 2048, 00:13:53.312 "data_size": 63488 00:13:53.312 }, 00:13:53.312 { 00:13:53.312 "name": "BaseBdev4", 00:13:53.312 "uuid": 
"2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:53.312 "is_configured": true, 00:13:53.312 "data_offset": 2048, 00:13:53.312 "data_size": 63488 00:13:53.312 } 00:13:53.312 ] 00:13:53.312 }' 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.312 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.571 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.571 [2024-11-20 07:10:35.776049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.830 "name": "Existed_Raid", 00:13:53.830 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:53.830 "strip_size_kb": 64, 00:13:53.830 "state": "configuring", 00:13:53.830 "raid_level": "raid0", 00:13:53.830 "superblock": true, 00:13:53.830 "num_base_bdevs": 4, 00:13:53.830 "num_base_bdevs_discovered": 2, 00:13:53.830 "num_base_bdevs_operational": 4, 00:13:53.830 "base_bdevs_list": [ 00:13:53.830 { 00:13:53.830 "name": null, 00:13:53.830 
"uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:53.830 "is_configured": false, 00:13:53.830 "data_offset": 0, 00:13:53.830 "data_size": 63488 00:13:53.830 }, 00:13:53.830 { 00:13:53.830 "name": null, 00:13:53.830 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:53.830 "is_configured": false, 00:13:53.830 "data_offset": 0, 00:13:53.830 "data_size": 63488 00:13:53.830 }, 00:13:53.830 { 00:13:53.830 "name": "BaseBdev3", 00:13:53.830 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:53.830 "is_configured": true, 00:13:53.830 "data_offset": 2048, 00:13:53.830 "data_size": 63488 00:13:53.830 }, 00:13:53.830 { 00:13:53.830 "name": "BaseBdev4", 00:13:53.830 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:53.830 "is_configured": true, 00:13:53.830 "data_offset": 2048, 00:13:53.830 "data_size": 63488 00:13:53.830 } 00:13:53.830 ] 00:13:53.830 }' 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.830 07:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.089 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.089 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.089 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.089 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.347 [2024-11-20 07:10:36.402555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.347 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.347 "name": "Existed_Raid", 00:13:54.347 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:54.347 "strip_size_kb": 64, 00:13:54.347 "state": "configuring", 00:13:54.347 "raid_level": "raid0", 00:13:54.347 "superblock": true, 00:13:54.347 "num_base_bdevs": 4, 00:13:54.347 "num_base_bdevs_discovered": 3, 00:13:54.347 "num_base_bdevs_operational": 4, 00:13:54.347 "base_bdevs_list": [ 00:13:54.348 { 00:13:54.348 "name": null, 00:13:54.348 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:54.348 "is_configured": false, 00:13:54.348 "data_offset": 0, 00:13:54.348 "data_size": 63488 00:13:54.348 }, 00:13:54.348 { 00:13:54.348 "name": "BaseBdev2", 00:13:54.348 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:54.348 "is_configured": true, 00:13:54.348 "data_offset": 2048, 00:13:54.348 "data_size": 63488 00:13:54.348 }, 00:13:54.348 { 00:13:54.348 "name": "BaseBdev3", 00:13:54.348 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:54.348 "is_configured": true, 00:13:54.348 "data_offset": 2048, 00:13:54.348 "data_size": 63488 00:13:54.348 }, 00:13:54.348 { 00:13:54.348 "name": "BaseBdev4", 00:13:54.348 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:54.348 "is_configured": true, 00:13:54.348 "data_offset": 2048, 00:13:54.348 "data_size": 63488 00:13:54.348 } 00:13:54.348 ] 00:13:54.348 }' 00:13:54.348 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.348 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.606 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.606 07:10:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.606 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.606 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:54.606 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e6bb4462-074a-4913-b98b-0ba7a7cbdb82 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.863 [2024-11-20 07:10:36.986227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:54.863 [2024-11-20 07:10:36.986612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:54.863 [2024-11-20 07:10:36.986668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:54.863 NewBaseBdev 00:13:54.863 [2024-11-20 07:10:36.986983] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:54.863 [2024-11-20 07:10:36.987148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:54.863 [2024-11-20 07:10:36.987163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:54.863 [2024-11-20 07:10:36.987316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.863 07:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.864 
07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.864 [ 00:13:54.864 { 00:13:54.864 "name": "NewBaseBdev", 00:13:54.864 "aliases": [ 00:13:54.864 "e6bb4462-074a-4913-b98b-0ba7a7cbdb82" 00:13:54.864 ], 00:13:54.864 "product_name": "Malloc disk", 00:13:54.864 "block_size": 512, 00:13:54.864 "num_blocks": 65536, 00:13:54.864 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:54.864 "assigned_rate_limits": { 00:13:54.864 "rw_ios_per_sec": 0, 00:13:54.864 "rw_mbytes_per_sec": 0, 00:13:54.864 "r_mbytes_per_sec": 0, 00:13:54.864 "w_mbytes_per_sec": 0 00:13:54.864 }, 00:13:54.864 "claimed": true, 00:13:54.864 "claim_type": "exclusive_write", 00:13:54.864 "zoned": false, 00:13:54.864 "supported_io_types": { 00:13:54.864 "read": true, 00:13:54.864 "write": true, 00:13:54.864 "unmap": true, 00:13:54.864 "flush": true, 00:13:54.864 "reset": true, 00:13:54.864 "nvme_admin": false, 00:13:54.864 "nvme_io": false, 00:13:54.864 "nvme_io_md": false, 00:13:54.864 "write_zeroes": true, 00:13:54.864 "zcopy": true, 00:13:54.864 "get_zone_info": false, 00:13:54.864 "zone_management": false, 00:13:54.864 "zone_append": false, 00:13:54.864 "compare": false, 00:13:54.864 "compare_and_write": false, 00:13:54.864 "abort": true, 00:13:54.864 "seek_hole": false, 00:13:54.864 "seek_data": false, 00:13:54.864 "copy": true, 00:13:54.864 "nvme_iov_md": false 00:13:54.864 }, 00:13:54.864 "memory_domains": [ 00:13:54.864 { 00:13:54.864 "dma_device_id": "system", 00:13:54.864 "dma_device_type": 1 00:13:54.864 }, 00:13:54.864 { 00:13:54.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.864 "dma_device_type": 2 00:13:54.864 } 00:13:54.864 ], 00:13:54.864 "driver_specific": {} 00:13:54.864 } 00:13:54.864 ] 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:54.864 07:10:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.864 "name": "Existed_Raid", 00:13:54.864 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:54.864 "strip_size_kb": 64, 00:13:54.864 
"state": "online", 00:13:54.864 "raid_level": "raid0", 00:13:54.864 "superblock": true, 00:13:54.864 "num_base_bdevs": 4, 00:13:54.864 "num_base_bdevs_discovered": 4, 00:13:54.864 "num_base_bdevs_operational": 4, 00:13:54.864 "base_bdevs_list": [ 00:13:54.864 { 00:13:54.864 "name": "NewBaseBdev", 00:13:54.864 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:54.864 "is_configured": true, 00:13:54.864 "data_offset": 2048, 00:13:54.864 "data_size": 63488 00:13:54.864 }, 00:13:54.864 { 00:13:54.864 "name": "BaseBdev2", 00:13:54.864 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:54.864 "is_configured": true, 00:13:54.864 "data_offset": 2048, 00:13:54.864 "data_size": 63488 00:13:54.864 }, 00:13:54.864 { 00:13:54.864 "name": "BaseBdev3", 00:13:54.864 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:54.864 "is_configured": true, 00:13:54.864 "data_offset": 2048, 00:13:54.864 "data_size": 63488 00:13:54.864 }, 00:13:54.864 { 00:13:54.864 "name": "BaseBdev4", 00:13:54.864 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:54.864 "is_configured": true, 00:13:54.864 "data_offset": 2048, 00:13:54.864 "data_size": 63488 00:13:54.864 } 00:13:54.864 ] 00:13:54.864 }' 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.864 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.431 
07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.431 [2024-11-20 07:10:37.465921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.431 "name": "Existed_Raid", 00:13:55.431 "aliases": [ 00:13:55.431 "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa" 00:13:55.431 ], 00:13:55.431 "product_name": "Raid Volume", 00:13:55.431 "block_size": 512, 00:13:55.431 "num_blocks": 253952, 00:13:55.431 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:55.431 "assigned_rate_limits": { 00:13:55.431 "rw_ios_per_sec": 0, 00:13:55.431 "rw_mbytes_per_sec": 0, 00:13:55.431 "r_mbytes_per_sec": 0, 00:13:55.431 "w_mbytes_per_sec": 0 00:13:55.431 }, 00:13:55.431 "claimed": false, 00:13:55.431 "zoned": false, 00:13:55.431 "supported_io_types": { 00:13:55.431 "read": true, 00:13:55.431 "write": true, 00:13:55.431 "unmap": true, 00:13:55.431 "flush": true, 00:13:55.431 "reset": true, 00:13:55.431 "nvme_admin": false, 00:13:55.431 "nvme_io": false, 00:13:55.431 "nvme_io_md": false, 00:13:55.431 "write_zeroes": true, 00:13:55.431 "zcopy": false, 00:13:55.431 "get_zone_info": false, 00:13:55.431 "zone_management": false, 00:13:55.431 "zone_append": false, 00:13:55.431 "compare": false, 00:13:55.431 "compare_and_write": false, 00:13:55.431 "abort": 
false, 00:13:55.431 "seek_hole": false, 00:13:55.431 "seek_data": false, 00:13:55.431 "copy": false, 00:13:55.431 "nvme_iov_md": false 00:13:55.431 }, 00:13:55.431 "memory_domains": [ 00:13:55.431 { 00:13:55.431 "dma_device_id": "system", 00:13:55.431 "dma_device_type": 1 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.431 "dma_device_type": 2 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "dma_device_id": "system", 00:13:55.431 "dma_device_type": 1 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.431 "dma_device_type": 2 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "dma_device_id": "system", 00:13:55.431 "dma_device_type": 1 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.431 "dma_device_type": 2 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "dma_device_id": "system", 00:13:55.431 "dma_device_type": 1 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.431 "dma_device_type": 2 00:13:55.431 } 00:13:55.431 ], 00:13:55.431 "driver_specific": { 00:13:55.431 "raid": { 00:13:55.431 "uuid": "fddfe84b-ba2e-4629-9aad-c7c8fc8485aa", 00:13:55.431 "strip_size_kb": 64, 00:13:55.431 "state": "online", 00:13:55.431 "raid_level": "raid0", 00:13:55.431 "superblock": true, 00:13:55.431 "num_base_bdevs": 4, 00:13:55.431 "num_base_bdevs_discovered": 4, 00:13:55.431 "num_base_bdevs_operational": 4, 00:13:55.431 "base_bdevs_list": [ 00:13:55.431 { 00:13:55.431 "name": "NewBaseBdev", 00:13:55.431 "uuid": "e6bb4462-074a-4913-b98b-0ba7a7cbdb82", 00:13:55.431 "is_configured": true, 00:13:55.431 "data_offset": 2048, 00:13:55.431 "data_size": 63488 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "name": "BaseBdev2", 00:13:55.431 "uuid": "2777025b-2a68-4b61-9c14-73ee94727cbf", 00:13:55.431 "is_configured": true, 00:13:55.431 "data_offset": 2048, 00:13:55.431 "data_size": 63488 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 
"name": "BaseBdev3", 00:13:55.431 "uuid": "6256d322-b3f7-4c50-b7eb-c3663f9adfc8", 00:13:55.431 "is_configured": true, 00:13:55.431 "data_offset": 2048, 00:13:55.431 "data_size": 63488 00:13:55.431 }, 00:13:55.431 { 00:13:55.431 "name": "BaseBdev4", 00:13:55.431 "uuid": "2fe8ba97-087c-4b9e-89c4-a626efab8daa", 00:13:55.431 "is_configured": true, 00:13:55.431 "data_offset": 2048, 00:13:55.431 "data_size": 63488 00:13:55.431 } 00:13:55.431 ] 00:13:55.431 } 00:13:55.431 } 00:13:55.431 }' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:55.431 BaseBdev2 00:13:55.431 BaseBdev3 00:13:55.431 BaseBdev4' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.431 07:10:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.431 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.689 [2024-11-20 07:10:37.789268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.689 [2024-11-20 07:10:37.789372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.689 [2024-11-20 07:10:37.789472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.689 [2024-11-20 07:10:37.789553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.689 [2024-11-20 07:10:37.789565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70370 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70370 ']' 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70370 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70370 00:13:55.689 killing process with pid 70370 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.689 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70370' 00:13:55.690 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70370 00:13:55.690 [2024-11-20 07:10:37.834977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.690 07:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70370 00:13:56.257 [2024-11-20 07:10:38.289199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.633 ************************************ 00:13:57.633 END TEST raid_state_function_test_sb 00:13:57.633 ************************************ 00:13:57.633 07:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:57.633 00:13:57.633 real 0m12.010s 00:13:57.633 user 0m18.908s 00:13:57.633 sys 
0m2.077s 00:13:57.633 07:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.633 07:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.633 07:10:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:57.633 07:10:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:57.633 07:10:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.634 07:10:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.634 ************************************ 00:13:57.634 START TEST raid_superblock_test 00:13:57.634 ************************************ 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71049 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71049 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71049 ']' 00:13:57.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.634 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.634 [2024-11-20 07:10:39.710401] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:57.634 [2024-11-20 07:10:39.711062] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71049 ] 00:13:57.634 [2024-11-20 07:10:39.886612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.893 [2024-11-20 07:10:40.010248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.154 [2024-11-20 07:10:40.214360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.154 [2024-11-20 07:10:40.214413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:58.413 
07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.413 malloc1 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.413 [2024-11-20 07:10:40.623857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:58.413 [2024-11-20 07:10:40.623973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.413 [2024-11-20 07:10:40.624036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:58.413 [2024-11-20 07:10:40.624085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.413 [2024-11-20 07:10:40.626492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.413 [2024-11-20 07:10:40.626533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:58.413 pt1 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:58.413 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.414 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.414 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.414 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:58.414 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.414 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 malloc2 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 [2024-11-20 07:10:40.685260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:58.675 [2024-11-20 07:10:40.685419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.675 [2024-11-20 07:10:40.685482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:58.675 [2024-11-20 07:10:40.685521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.675 [2024-11-20 07:10:40.687782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.675 [2024-11-20 07:10:40.687859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:58.675 
pt2 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 malloc3 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 [2024-11-20 07:10:40.760686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:58.675 [2024-11-20 07:10:40.760793] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.675 [2024-11-20 07:10:40.760838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:58.675 [2024-11-20 07:10:40.760876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.675 [2024-11-20 07:10:40.763252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.675 [2024-11-20 07:10:40.763352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:58.675 pt3 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 malloc4 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 [2024-11-20 07:10:40.821904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:58.675 [2024-11-20 07:10:40.822016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.675 [2024-11-20 07:10:40.822055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:58.675 [2024-11-20 07:10:40.822100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.675 [2024-11-20 07:10:40.824188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.675 [2024-11-20 07:10:40.824263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:58.675 pt4 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 [2024-11-20 07:10:40.833921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:58.675 [2024-11-20 
07:10:40.835787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:58.675 [2024-11-20 07:10:40.835897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:58.675 [2024-11-20 07:10:40.835993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:58.675 [2024-11-20 07:10:40.836213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:58.675 [2024-11-20 07:10:40.836258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:58.675 [2024-11-20 07:10:40.836579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:58.675 [2024-11-20 07:10:40.836795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:58.675 [2024-11-20 07:10:40.836843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:58.675 [2024-11-20 07:10:40.837060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.675 "name": "raid_bdev1", 00:13:58.675 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:13:58.675 "strip_size_kb": 64, 00:13:58.675 "state": "online", 00:13:58.676 "raid_level": "raid0", 00:13:58.676 "superblock": true, 00:13:58.676 "num_base_bdevs": 4, 00:13:58.676 "num_base_bdevs_discovered": 4, 00:13:58.676 "num_base_bdevs_operational": 4, 00:13:58.676 "base_bdevs_list": [ 00:13:58.676 { 00:13:58.676 "name": "pt1", 00:13:58.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:58.676 "is_configured": true, 00:13:58.676 "data_offset": 2048, 00:13:58.676 "data_size": 63488 00:13:58.676 }, 00:13:58.676 { 00:13:58.676 "name": "pt2", 00:13:58.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:58.676 "is_configured": true, 00:13:58.676 "data_offset": 2048, 00:13:58.676 "data_size": 63488 00:13:58.676 }, 00:13:58.676 { 00:13:58.676 "name": "pt3", 00:13:58.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:58.676 "is_configured": true, 00:13:58.676 "data_offset": 2048, 00:13:58.676 
"data_size": 63488 00:13:58.676 }, 00:13:58.676 { 00:13:58.676 "name": "pt4", 00:13:58.676 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:58.676 "is_configured": true, 00:13:58.676 "data_offset": 2048, 00:13:58.676 "data_size": 63488 00:13:58.676 } 00:13:58.676 ] 00:13:58.676 }' 00:13:58.676 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.676 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.245 [2024-11-20 07:10:41.265597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.245 "name": "raid_bdev1", 00:13:59.245 "aliases": [ 00:13:59.245 "71980b6b-9441-4124-87e6-52906cc8d3c7" 
00:13:59.245 ], 00:13:59.245 "product_name": "Raid Volume", 00:13:59.245 "block_size": 512, 00:13:59.245 "num_blocks": 253952, 00:13:59.245 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:13:59.245 "assigned_rate_limits": { 00:13:59.245 "rw_ios_per_sec": 0, 00:13:59.245 "rw_mbytes_per_sec": 0, 00:13:59.245 "r_mbytes_per_sec": 0, 00:13:59.245 "w_mbytes_per_sec": 0 00:13:59.245 }, 00:13:59.245 "claimed": false, 00:13:59.245 "zoned": false, 00:13:59.245 "supported_io_types": { 00:13:59.245 "read": true, 00:13:59.245 "write": true, 00:13:59.245 "unmap": true, 00:13:59.245 "flush": true, 00:13:59.245 "reset": true, 00:13:59.245 "nvme_admin": false, 00:13:59.245 "nvme_io": false, 00:13:59.245 "nvme_io_md": false, 00:13:59.245 "write_zeroes": true, 00:13:59.245 "zcopy": false, 00:13:59.245 "get_zone_info": false, 00:13:59.245 "zone_management": false, 00:13:59.245 "zone_append": false, 00:13:59.245 "compare": false, 00:13:59.245 "compare_and_write": false, 00:13:59.245 "abort": false, 00:13:59.245 "seek_hole": false, 00:13:59.245 "seek_data": false, 00:13:59.245 "copy": false, 00:13:59.245 "nvme_iov_md": false 00:13:59.245 }, 00:13:59.245 "memory_domains": [ 00:13:59.245 { 00:13:59.245 "dma_device_id": "system", 00:13:59.245 "dma_device_type": 1 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.245 "dma_device_type": 2 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "dma_device_id": "system", 00:13:59.245 "dma_device_type": 1 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.245 "dma_device_type": 2 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "dma_device_id": "system", 00:13:59.245 "dma_device_type": 1 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.245 "dma_device_type": 2 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "dma_device_id": "system", 00:13:59.245 "dma_device_type": 1 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:59.245 "dma_device_type": 2 00:13:59.245 } 00:13:59.245 ], 00:13:59.245 "driver_specific": { 00:13:59.245 "raid": { 00:13:59.245 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:13:59.245 "strip_size_kb": 64, 00:13:59.245 "state": "online", 00:13:59.245 "raid_level": "raid0", 00:13:59.245 "superblock": true, 00:13:59.245 "num_base_bdevs": 4, 00:13:59.245 "num_base_bdevs_discovered": 4, 00:13:59.245 "num_base_bdevs_operational": 4, 00:13:59.245 "base_bdevs_list": [ 00:13:59.245 { 00:13:59.245 "name": "pt1", 00:13:59.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.245 "is_configured": true, 00:13:59.245 "data_offset": 2048, 00:13:59.245 "data_size": 63488 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "name": "pt2", 00:13:59.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.245 "is_configured": true, 00:13:59.245 "data_offset": 2048, 00:13:59.245 "data_size": 63488 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "name": "pt3", 00:13:59.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.245 "is_configured": true, 00:13:59.245 "data_offset": 2048, 00:13:59.245 "data_size": 63488 00:13:59.245 }, 00:13:59.245 { 00:13:59.245 "name": "pt4", 00:13:59.245 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:59.245 "is_configured": true, 00:13:59.245 "data_offset": 2048, 00:13:59.245 "data_size": 63488 00:13:59.245 } 00:13:59.245 ] 00:13:59.245 } 00:13:59.245 } 00:13:59.245 }' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:59.245 pt2 00:13:59.245 pt3 00:13:59.245 pt4' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.245 07:10:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.245 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 [2024-11-20 07:10:41.605027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=71980b6b-9441-4124-87e6-52906cc8d3c7 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 71980b6b-9441-4124-87e6-52906cc8d3c7 ']' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 [2024-11-20 07:10:41.636646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.505 [2024-11-20 07:10:41.636676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.505 [2024-11-20 07:10:41.636770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.505 [2024-11-20 07:10:41.636843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.505 [2024-11-20 07:10:41.636859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:59.505 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.764 07:10:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.764 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.764 [2024-11-20 07:10:41.800439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:59.764 [2024-11-20 07:10:41.802545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:59.764 [2024-11-20 07:10:41.802676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:59.764 [2024-11-20 07:10:41.802755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:59.764 [2024-11-20 07:10:41.802852] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:59.764 [2024-11-20 07:10:41.802965] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:59.764 [2024-11-20 07:10:41.803046] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:59.764 [2024-11-20 07:10:41.803111] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:59.764 [2024-11-20 07:10:41.803174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.764 [2024-11-20 07:10:41.803217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:59.764 request: 00:13:59.764 { 00:13:59.764 "name": "raid_bdev1", 00:13:59.764 "raid_level": "raid0", 00:13:59.764 "base_bdevs": [ 00:13:59.765 "malloc1", 00:13:59.765 "malloc2", 00:13:59.765 "malloc3", 00:13:59.765 "malloc4" 00:13:59.765 ], 00:13:59.765 "strip_size_kb": 64, 00:13:59.765 "superblock": false, 00:13:59.765 "method": "bdev_raid_create", 00:13:59.765 "req_id": 1 00:13:59.765 } 00:13:59.765 Got JSON-RPC error response 00:13:59.765 response: 00:13:59.765 { 00:13:59.765 "code": -17, 00:13:59.765 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:59.765 } 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.765 [2024-11-20 07:10:41.864281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:59.765 [2024-11-20 07:10:41.864370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.765 [2024-11-20 07:10:41.864391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:59.765 [2024-11-20 07:10:41.864402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.765 [2024-11-20 07:10:41.866572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.765 [2024-11-20 07:10:41.866659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:59.765 [2024-11-20 07:10:41.866759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:59.765 [2024-11-20 07:10:41.866825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:59.765 pt1 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.765 "name": "raid_bdev1", 00:13:59.765 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:13:59.765 "strip_size_kb": 64, 00:13:59.765 "state": "configuring", 00:13:59.765 "raid_level": "raid0", 00:13:59.765 "superblock": true, 00:13:59.765 "num_base_bdevs": 4, 00:13:59.765 "num_base_bdevs_discovered": 1, 00:13:59.765 "num_base_bdevs_operational": 4, 00:13:59.765 "base_bdevs_list": [ 00:13:59.765 { 00:13:59.765 "name": "pt1", 00:13:59.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.765 "is_configured": true, 00:13:59.765 "data_offset": 2048, 00:13:59.765 "data_size": 63488 00:13:59.765 }, 00:13:59.765 { 00:13:59.765 "name": null, 00:13:59.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.765 "is_configured": false, 00:13:59.765 "data_offset": 2048, 00:13:59.765 "data_size": 63488 00:13:59.765 }, 00:13:59.765 { 00:13:59.765 "name": null, 00:13:59.765 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.765 "is_configured": false, 00:13:59.765 "data_offset": 2048, 00:13:59.765 "data_size": 63488 00:13:59.765 }, 00:13:59.765 { 00:13:59.765 "name": null, 00:13:59.765 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:59.765 "is_configured": false, 00:13:59.765 "data_offset": 2048, 00:13:59.765 "data_size": 63488 00:13:59.765 } 00:13:59.765 ] 00:13:59.765 }' 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.765 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.334 [2024-11-20 07:10:42.319519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.334 [2024-11-20 07:10:42.319669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.334 [2024-11-20 07:10:42.319711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:00.334 [2024-11-20 07:10:42.319767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.334 [2024-11-20 07:10:42.320273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.334 [2024-11-20 07:10:42.320343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.334 [2024-11-20 07:10:42.320462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.334 [2024-11-20 07:10:42.320518] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.334 pt2 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.334 [2024-11-20 07:10:42.331499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.334 07:10:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.334 "name": "raid_bdev1", 00:14:00.334 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:14:00.334 "strip_size_kb": 64, 00:14:00.334 "state": "configuring", 00:14:00.334 "raid_level": "raid0", 00:14:00.334 "superblock": true, 00:14:00.334 "num_base_bdevs": 4, 00:14:00.334 "num_base_bdevs_discovered": 1, 00:14:00.334 "num_base_bdevs_operational": 4, 00:14:00.334 "base_bdevs_list": [ 00:14:00.334 { 00:14:00.334 "name": "pt1", 00:14:00.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.334 "is_configured": true, 00:14:00.334 "data_offset": 2048, 00:14:00.334 "data_size": 63488 00:14:00.334 }, 00:14:00.334 { 00:14:00.334 "name": null, 00:14:00.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.334 "is_configured": false, 00:14:00.334 "data_offset": 0, 00:14:00.334 "data_size": 63488 00:14:00.334 }, 00:14:00.334 { 00:14:00.334 "name": null, 00:14:00.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.334 "is_configured": false, 00:14:00.334 "data_offset": 2048, 00:14:00.334 "data_size": 63488 00:14:00.334 }, 00:14:00.334 { 00:14:00.334 "name": null, 00:14:00.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.334 "is_configured": false, 00:14:00.334 "data_offset": 2048, 00:14:00.334 "data_size": 63488 00:14:00.334 } 00:14:00.334 ] 00:14:00.334 }' 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.334 07:10:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.594 [2024-11-20 07:10:42.802696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.594 [2024-11-20 07:10:42.802825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.594 [2024-11-20 07:10:42.802868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:00.594 [2024-11-20 07:10:42.802900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.594 [2024-11-20 07:10:42.803451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.594 [2024-11-20 07:10:42.803518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.594 [2024-11-20 07:10:42.803644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.594 [2024-11-20 07:10:42.803700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.594 pt2 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.594 [2024-11-20 07:10:42.814633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:00.594 [2024-11-20 07:10:42.814690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.594 [2024-11-20 07:10:42.814719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:00.594 [2024-11-20 07:10:42.814730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.594 [2024-11-20 07:10:42.815159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.594 [2024-11-20 07:10:42.815176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:00.594 [2024-11-20 07:10:42.815257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:00.594 [2024-11-20 07:10:42.815278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:00.594 pt3 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.594 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.594 [2024-11-20 07:10:42.826622] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:00.594 [2024-11-20 07:10:42.826701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.594 [2024-11-20 07:10:42.826735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:00.594 [2024-11-20 07:10:42.826750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.594 [2024-11-20 07:10:42.827251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.594 [2024-11-20 07:10:42.827280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:00.594 [2024-11-20 07:10:42.827376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:00.594 [2024-11-20 07:10:42.827416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:00.595 [2024-11-20 07:10:42.827566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:00.595 [2024-11-20 07:10:42.827581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:00.595 [2024-11-20 07:10:42.827857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:00.595 [2024-11-20 07:10:42.828040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:00.595 [2024-11-20 07:10:42.828055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:00.595 [2024-11-20 07:10:42.828204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.595 pt4 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.595 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.854 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.854 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.854 "name": "raid_bdev1", 00:14:00.854 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:14:00.854 "strip_size_kb": 64, 00:14:00.854 "state": "online", 00:14:00.854 "raid_level": "raid0", 00:14:00.854 
"superblock": true, 00:14:00.854 "num_base_bdevs": 4, 00:14:00.854 "num_base_bdevs_discovered": 4, 00:14:00.854 "num_base_bdevs_operational": 4, 00:14:00.854 "base_bdevs_list": [ 00:14:00.854 { 00:14:00.854 "name": "pt1", 00:14:00.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.854 "is_configured": true, 00:14:00.854 "data_offset": 2048, 00:14:00.854 "data_size": 63488 00:14:00.854 }, 00:14:00.854 { 00:14:00.854 "name": "pt2", 00:14:00.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.854 "is_configured": true, 00:14:00.854 "data_offset": 2048, 00:14:00.854 "data_size": 63488 00:14:00.854 }, 00:14:00.854 { 00:14:00.854 "name": "pt3", 00:14:00.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.854 "is_configured": true, 00:14:00.854 "data_offset": 2048, 00:14:00.854 "data_size": 63488 00:14:00.854 }, 00:14:00.854 { 00:14:00.855 "name": "pt4", 00:14:00.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.855 "is_configured": true, 00:14:00.855 "data_offset": 2048, 00:14:00.855 "data_size": 63488 00:14:00.855 } 00:14:00.855 ] 00:14:00.855 }' 00:14:00.855 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.855 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.114 07:10:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.114 [2024-11-20 07:10:43.322171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.114 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.114 "name": "raid_bdev1", 00:14:01.114 "aliases": [ 00:14:01.114 "71980b6b-9441-4124-87e6-52906cc8d3c7" 00:14:01.114 ], 00:14:01.114 "product_name": "Raid Volume", 00:14:01.114 "block_size": 512, 00:14:01.114 "num_blocks": 253952, 00:14:01.114 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:14:01.114 "assigned_rate_limits": { 00:14:01.114 "rw_ios_per_sec": 0, 00:14:01.114 "rw_mbytes_per_sec": 0, 00:14:01.114 "r_mbytes_per_sec": 0, 00:14:01.114 "w_mbytes_per_sec": 0 00:14:01.114 }, 00:14:01.114 "claimed": false, 00:14:01.114 "zoned": false, 00:14:01.114 "supported_io_types": { 00:14:01.114 "read": true, 00:14:01.114 "write": true, 00:14:01.114 "unmap": true, 00:14:01.114 "flush": true, 00:14:01.114 "reset": true, 00:14:01.114 "nvme_admin": false, 00:14:01.114 "nvme_io": false, 00:14:01.114 "nvme_io_md": false, 00:14:01.114 "write_zeroes": true, 00:14:01.114 "zcopy": false, 00:14:01.114 "get_zone_info": false, 00:14:01.114 "zone_management": false, 00:14:01.114 "zone_append": false, 00:14:01.114 "compare": false, 00:14:01.114 "compare_and_write": false, 00:14:01.114 "abort": false, 00:14:01.114 "seek_hole": false, 00:14:01.114 "seek_data": false, 00:14:01.114 "copy": false, 00:14:01.114 "nvme_iov_md": false 00:14:01.114 }, 00:14:01.114 
"memory_domains": [ 00:14:01.114 { 00:14:01.114 "dma_device_id": "system", 00:14:01.114 "dma_device_type": 1 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.114 "dma_device_type": 2 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "dma_device_id": "system", 00:14:01.114 "dma_device_type": 1 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.114 "dma_device_type": 2 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "dma_device_id": "system", 00:14:01.114 "dma_device_type": 1 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.114 "dma_device_type": 2 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "dma_device_id": "system", 00:14:01.114 "dma_device_type": 1 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.114 "dma_device_type": 2 00:14:01.114 } 00:14:01.114 ], 00:14:01.114 "driver_specific": { 00:14:01.114 "raid": { 00:14:01.114 "uuid": "71980b6b-9441-4124-87e6-52906cc8d3c7", 00:14:01.114 "strip_size_kb": 64, 00:14:01.114 "state": "online", 00:14:01.114 "raid_level": "raid0", 00:14:01.114 "superblock": true, 00:14:01.114 "num_base_bdevs": 4, 00:14:01.114 "num_base_bdevs_discovered": 4, 00:14:01.114 "num_base_bdevs_operational": 4, 00:14:01.114 "base_bdevs_list": [ 00:14:01.114 { 00:14:01.114 "name": "pt1", 00:14:01.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.114 "is_configured": true, 00:14:01.114 "data_offset": 2048, 00:14:01.114 "data_size": 63488 00:14:01.114 }, 00:14:01.114 { 00:14:01.114 "name": "pt2", 00:14:01.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.115 "is_configured": true, 00:14:01.115 "data_offset": 2048, 00:14:01.115 "data_size": 63488 00:14:01.115 }, 00:14:01.115 { 00:14:01.115 "name": "pt3", 00:14:01.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.115 "is_configured": true, 00:14:01.115 "data_offset": 2048, 00:14:01.115 "data_size": 63488 
00:14:01.115 }, 00:14:01.115 { 00:14:01.115 "name": "pt4", 00:14:01.115 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.115 "is_configured": true, 00:14:01.115 "data_offset": 2048, 00:14:01.115 "data_size": 63488 00:14:01.115 } 00:14:01.115 ] 00:14:01.115 } 00:14:01.115 } 00:14:01.115 }' 00:14:01.115 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:01.375 pt2 00:14:01.375 pt3 00:14:01.375 pt4' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.375 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:01.634 [2024-11-20 07:10:43.641701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 71980b6b-9441-4124-87e6-52906cc8d3c7 '!=' 71980b6b-9441-4124-87e6-52906cc8d3c7 ']' 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71049 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71049 ']' 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71049 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71049 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71049' 00:14:01.635 killing process with pid 71049 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71049 00:14:01.635 [2024-11-20 07:10:43.711117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.635 07:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71049 00:14:01.635 [2024-11-20 07:10:43.711553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.635 [2024-11-20 07:10:43.711917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.635 [2024-11-20 07:10:43.712115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:01.894 [2024-11-20 07:10:44.144892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.271 07:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:03.271 00:14:03.271 real 0m5.694s 00:14:03.271 user 0m8.152s 00:14:03.271 sys 0m0.907s 00:14:03.271 07:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.271 07:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.271 ************************************ 00:14:03.271 END TEST raid_superblock_test 
00:14:03.271 ************************************ 00:14:03.271 07:10:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:03.271 07:10:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:03.271 07:10:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.271 07:10:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.271 ************************************ 00:14:03.271 START TEST raid_read_error_test 00:14:03.271 ************************************ 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YeUHarTGm1 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=71308 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71308 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71308 ']' 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.271 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.271 [2024-11-20 07:10:45.490956] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:03.272 [2024-11-20 07:10:45.491251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71308 ] 00:14:03.531 [2024-11-20 07:10:45.678938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.791 [2024-11-20 07:10:45.807939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.791 [2024-11-20 07:10:46.039210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.791 [2024-11-20 07:10:46.039275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.361 BaseBdev1_malloc 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.361 true 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.361 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 [2024-11-20 07:10:46.413964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:04.362 [2024-11-20 07:10:46.414019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.362 [2024-11-20 07:10:46.414039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:04.362 [2024-11-20 07:10:46.414050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.362 [2024-11-20 07:10:46.416133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.362 [2024-11-20 07:10:46.416175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:04.362 BaseBdev1 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 BaseBdev2_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 true 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 [2024-11-20 07:10:46.476188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:04.362 [2024-11-20 07:10:46.476252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.362 [2024-11-20 07:10:46.476271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:04.362 [2024-11-20 07:10:46.476283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.362 [2024-11-20 07:10:46.478583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.362 [2024-11-20 07:10:46.478623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:04.362 BaseBdev2 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 BaseBdev3_malloc 00:14:04.362 07:10:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 true 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 [2024-11-20 07:10:46.557780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:04.362 [2024-11-20 07:10:46.557844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.362 [2024-11-20 07:10:46.557866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:04.362 [2024-11-20 07:10:46.557878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.362 [2024-11-20 07:10:46.560215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.362 [2024-11-20 07:10:46.560310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:04.362 BaseBdev3 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.362 BaseBdev4_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.362 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.623 true 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.623 [2024-11-20 07:10:46.634602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:04.623 [2024-11-20 07:10:46.634682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.623 [2024-11-20 07:10:46.634708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:04.623 [2024-11-20 07:10:46.634720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.623 [2024-11-20 07:10:46.637213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.623 [2024-11-20 07:10:46.637264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:04.623 BaseBdev4 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.623 [2024-11-20 07:10:46.646653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.623 [2024-11-20 07:10:46.648851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.623 [2024-11-20 07:10:46.648957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.623 [2024-11-20 07:10:46.649031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.623 [2024-11-20 07:10:46.649357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:04.623 [2024-11-20 07:10:46.649379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:04.623 [2024-11-20 07:10:46.649707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:04.623 [2024-11-20 07:10:46.649909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:04.623 [2024-11-20 07:10:46.649926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:04.623 [2024-11-20 07:10:46.650150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:04.623 07:10:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.623 "name": "raid_bdev1", 00:14:04.623 "uuid": "4b6881ba-b883-4d5e-a1e5-4cfa733541de", 00:14:04.623 "strip_size_kb": 64, 00:14:04.623 "state": "online", 00:14:04.623 "raid_level": "raid0", 00:14:04.623 "superblock": true, 00:14:04.623 "num_base_bdevs": 4, 00:14:04.623 "num_base_bdevs_discovered": 4, 00:14:04.623 "num_base_bdevs_operational": 4, 00:14:04.623 "base_bdevs_list": [ 00:14:04.623 
{ 00:14:04.623 "name": "BaseBdev1", 00:14:04.623 "uuid": "686e070a-941f-55ee-a543-33a36ff59a5a", 00:14:04.623 "is_configured": true, 00:14:04.623 "data_offset": 2048, 00:14:04.623 "data_size": 63488 00:14:04.623 }, 00:14:04.623 { 00:14:04.623 "name": "BaseBdev2", 00:14:04.623 "uuid": "f6f61c86-307c-5584-a863-71c28d8ad937", 00:14:04.623 "is_configured": true, 00:14:04.623 "data_offset": 2048, 00:14:04.623 "data_size": 63488 00:14:04.623 }, 00:14:04.623 { 00:14:04.623 "name": "BaseBdev3", 00:14:04.623 "uuid": "223fe7d9-1c34-5fb6-8050-e26d0364f3a6", 00:14:04.623 "is_configured": true, 00:14:04.623 "data_offset": 2048, 00:14:04.623 "data_size": 63488 00:14:04.623 }, 00:14:04.623 { 00:14:04.623 "name": "BaseBdev4", 00:14:04.623 "uuid": "82fb638d-0a0d-5d4f-a967-6b020aaea9bf", 00:14:04.623 "is_configured": true, 00:14:04.623 "data_offset": 2048, 00:14:04.623 "data_size": 63488 00:14:04.623 } 00:14:04.623 ] 00:14:04.623 }' 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.623 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.883 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:04.883 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:05.142 [2024-11-20 07:10:47.191341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:06.080 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.081 07:10:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.081 07:10:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.081 "name": "raid_bdev1", 00:14:06.081 "uuid": "4b6881ba-b883-4d5e-a1e5-4cfa733541de", 00:14:06.081 "strip_size_kb": 64, 00:14:06.081 "state": "online", 00:14:06.081 "raid_level": "raid0", 00:14:06.081 "superblock": true, 00:14:06.081 "num_base_bdevs": 4, 00:14:06.081 "num_base_bdevs_discovered": 4, 00:14:06.081 "num_base_bdevs_operational": 4, 00:14:06.081 "base_bdevs_list": [ 00:14:06.081 { 00:14:06.081 "name": "BaseBdev1", 00:14:06.081 "uuid": "686e070a-941f-55ee-a543-33a36ff59a5a", 00:14:06.081 "is_configured": true, 00:14:06.081 "data_offset": 2048, 00:14:06.081 "data_size": 63488 00:14:06.081 }, 00:14:06.081 { 00:14:06.081 "name": "BaseBdev2", 00:14:06.081 "uuid": "f6f61c86-307c-5584-a863-71c28d8ad937", 00:14:06.081 "is_configured": true, 00:14:06.081 "data_offset": 2048, 00:14:06.081 "data_size": 63488 00:14:06.081 }, 00:14:06.081 { 00:14:06.081 "name": "BaseBdev3", 00:14:06.081 "uuid": "223fe7d9-1c34-5fb6-8050-e26d0364f3a6", 00:14:06.081 "is_configured": true, 00:14:06.081 "data_offset": 2048, 00:14:06.081 "data_size": 63488 00:14:06.081 }, 00:14:06.081 { 00:14:06.081 "name": "BaseBdev4", 00:14:06.081 "uuid": "82fb638d-0a0d-5d4f-a967-6b020aaea9bf", 00:14:06.081 "is_configured": true, 00:14:06.081 "data_offset": 2048, 00:14:06.081 "data_size": 63488 00:14:06.081 } 00:14:06.081 ] 00:14:06.081 }' 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.081 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.341 [2024-11-20 07:10:48.538120] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.341 [2024-11-20 07:10:48.538170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.341 [2024-11-20 07:10:48.541387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.341 [2024-11-20 07:10:48.541473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.341 [2024-11-20 07:10:48.541528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.341 [2024-11-20 07:10:48.541542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71308 00:14:06.341 { 00:14:06.341 "results": [ 00:14:06.341 { 00:14:06.341 "job": "raid_bdev1", 00:14:06.341 "core_mask": "0x1", 00:14:06.341 "workload": "randrw", 00:14:06.341 "percentage": 50, 00:14:06.341 "status": "finished", 00:14:06.341 "queue_depth": 1, 00:14:06.341 "io_size": 131072, 00:14:06.341 "runtime": 1.347229, 00:14:06.341 "iops": 14537.988716098005, 00:14:06.341 "mibps": 1817.2485895122506, 00:14:06.341 "io_failed": 1, 00:14:06.341 "io_timeout": 0, 00:14:06.341 "avg_latency_us": 95.62994081048767, 00:14:06.341 "min_latency_us": 26.941484716157206, 00:14:06.341 "max_latency_us": 1709.9458515283843 00:14:06.341 } 00:14:06.341 ], 00:14:06.341 "core_count": 1 00:14:06.341 } 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71308 ']' 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71308 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71308 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.341 killing process with pid 71308 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71308' 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71308 00:14:06.341 07:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71308 00:14:06.341 [2024-11-20 07:10:48.594624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.910 [2024-11-20 07:10:48.946183] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YeUHarTGm1 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:08.291 00:14:08.291 real 0m4.813s 00:14:08.291 user 0m5.633s 00:14:08.291 sys 0m0.609s 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:08.291 07:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.291 ************************************ 00:14:08.291 END TEST raid_read_error_test 00:14:08.291 ************************************ 00:14:08.291 07:10:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:08.291 07:10:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:08.291 07:10:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.291 07:10:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.291 ************************************ 00:14:08.291 START TEST raid_write_error_test 00:14:08.291 ************************************ 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lYWfNQ0Qfh 00:14:08.291 07:10:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71458 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71458 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71458 ']' 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.291 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.291 [2024-11-20 07:10:50.367948] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:08.291 [2024-11-20 07:10:50.368067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71458 ] 00:14:08.291 [2024-11-20 07:10:50.544511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.551 [2024-11-20 07:10:50.665684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.811 [2024-11-20 07:10:50.884614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.811 [2024-11-20 07:10:50.884662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.071 BaseBdev1_malloc 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.071 true 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.071 [2024-11-20 07:10:51.288719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:09.071 [2024-11-20 07:10:51.288771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.071 [2024-11-20 07:10:51.288789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:09.071 [2024-11-20 07:10:51.288799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.071 [2024-11-20 07:10:51.290902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.071 [2024-11-20 07:10:51.290939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.071 BaseBdev1 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.071 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.331 BaseBdev2_malloc 00:14:09.331 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.331 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:09.331 07:10:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.331 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.331 true 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 [2024-11-20 07:10:51.358818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:09.332 [2024-11-20 07:10:51.358869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.332 [2024-11-20 07:10:51.358884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:09.332 [2024-11-20 07:10:51.358894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.332 [2024-11-20 07:10:51.360967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.332 [2024-11-20 07:10:51.361003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:09.332 BaseBdev2 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:09.332 BaseBdev3_malloc 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 true 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 [2024-11-20 07:10:51.447325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:09.332 [2024-11-20 07:10:51.447385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.332 [2024-11-20 07:10:51.447403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:09.332 [2024-11-20 07:10:51.447415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.332 [2024-11-20 07:10:51.449740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.332 [2024-11-20 07:10:51.449781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:09.332 BaseBdev3 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 BaseBdev4_malloc 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 true 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 [2024-11-20 07:10:51.518239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:09.332 [2024-11-20 07:10:51.518315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.332 [2024-11-20 07:10:51.518351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:09.332 [2024-11-20 07:10:51.518364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.332 [2024-11-20 07:10:51.520788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.332 [2024-11-20 07:10:51.520835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:09.332 BaseBdev4 
00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 [2024-11-20 07:10:51.530262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.332 [2024-11-20 07:10:51.532079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.332 [2024-11-20 07:10:51.532159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.332 [2024-11-20 07:10:51.532231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:09.332 [2024-11-20 07:10:51.532501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:09.332 [2024-11-20 07:10:51.532528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:09.332 [2024-11-20 07:10:51.532815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:09.332 [2024-11-20 07:10:51.533006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:09.332 [2024-11-20 07:10:51.533024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:09.332 [2024-11-20 07:10:51.533206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.332 "name": "raid_bdev1", 00:14:09.332 "uuid": "c2fa5af6-c9a9-45ff-bedb-01c2f6460227", 00:14:09.332 "strip_size_kb": 64, 00:14:09.332 "state": "online", 00:14:09.332 "raid_level": "raid0", 00:14:09.332 "superblock": true, 00:14:09.332 "num_base_bdevs": 4, 00:14:09.332 "num_base_bdevs_discovered": 4, 00:14:09.332 
"num_base_bdevs_operational": 4, 00:14:09.332 "base_bdevs_list": [ 00:14:09.332 { 00:14:09.332 "name": "BaseBdev1", 00:14:09.332 "uuid": "e0eeded7-cff7-562e-abf8-bb18e980612b", 00:14:09.332 "is_configured": true, 00:14:09.332 "data_offset": 2048, 00:14:09.332 "data_size": 63488 00:14:09.332 }, 00:14:09.332 { 00:14:09.332 "name": "BaseBdev2", 00:14:09.332 "uuid": "4c72f26a-d776-5867-a298-e6be5d37493e", 00:14:09.332 "is_configured": true, 00:14:09.332 "data_offset": 2048, 00:14:09.332 "data_size": 63488 00:14:09.332 }, 00:14:09.332 { 00:14:09.332 "name": "BaseBdev3", 00:14:09.332 "uuid": "c833fdc1-992b-5ba3-a629-30ed5fc166d2", 00:14:09.332 "is_configured": true, 00:14:09.332 "data_offset": 2048, 00:14:09.332 "data_size": 63488 00:14:09.332 }, 00:14:09.332 { 00:14:09.332 "name": "BaseBdev4", 00:14:09.332 "uuid": "b15e1343-eb7d-588a-bd65-84756913038a", 00:14:09.332 "is_configured": true, 00:14:09.332 "data_offset": 2048, 00:14:09.332 "data_size": 63488 00:14:09.332 } 00:14:09.332 ] 00:14:09.332 }' 00:14:09.332 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.333 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.901 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:09.901 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:09.901 [2024-11-20 07:10:52.126802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.838 "name": "raid_bdev1", 00:14:10.838 "uuid": "c2fa5af6-c9a9-45ff-bedb-01c2f6460227", 00:14:10.838 "strip_size_kb": 64, 00:14:10.838 "state": "online", 00:14:10.838 "raid_level": "raid0", 00:14:10.838 "superblock": true, 00:14:10.838 "num_base_bdevs": 4, 00:14:10.838 "num_base_bdevs_discovered": 4, 00:14:10.838 "num_base_bdevs_operational": 4, 00:14:10.838 "base_bdevs_list": [ 00:14:10.838 { 00:14:10.838 "name": "BaseBdev1", 00:14:10.838 "uuid": "e0eeded7-cff7-562e-abf8-bb18e980612b", 00:14:10.838 "is_configured": true, 00:14:10.838 "data_offset": 2048, 00:14:10.838 "data_size": 63488 00:14:10.838 }, 00:14:10.838 { 00:14:10.838 "name": "BaseBdev2", 00:14:10.838 "uuid": "4c72f26a-d776-5867-a298-e6be5d37493e", 00:14:10.838 "is_configured": true, 00:14:10.838 "data_offset": 2048, 00:14:10.838 "data_size": 63488 00:14:10.838 }, 00:14:10.838 { 00:14:10.838 "name": "BaseBdev3", 00:14:10.838 "uuid": "c833fdc1-992b-5ba3-a629-30ed5fc166d2", 00:14:10.838 "is_configured": true, 00:14:10.838 "data_offset": 2048, 00:14:10.838 "data_size": 63488 00:14:10.838 }, 00:14:10.838 { 00:14:10.838 "name": "BaseBdev4", 00:14:10.838 "uuid": "b15e1343-eb7d-588a-bd65-84756913038a", 00:14:10.838 "is_configured": true, 00:14:10.838 "data_offset": 2048, 00:14:10.838 "data_size": 63488 00:14:10.838 } 00:14:10.838 ] 00:14:10.838 }' 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.838 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:11.406 [2024-11-20 07:10:53.507706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.406 [2024-11-20 07:10:53.507747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.406 [2024-11-20 07:10:53.510923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.406 [2024-11-20 07:10:53.510992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.406 [2024-11-20 07:10:53.511043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.406 [2024-11-20 07:10:53.511062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:11.406 { 00:14:11.406 "results": [ 00:14:11.406 { 00:14:11.406 "job": "raid_bdev1", 00:14:11.406 "core_mask": "0x1", 00:14:11.406 "workload": "randrw", 00:14:11.406 "percentage": 50, 00:14:11.406 "status": "finished", 00:14:11.406 "queue_depth": 1, 00:14:11.406 "io_size": 131072, 00:14:11.406 "runtime": 1.381615, 00:14:11.406 "iops": 14036.471810164192, 00:14:11.406 "mibps": 1754.558976270524, 00:14:11.406 "io_failed": 1, 00:14:11.406 "io_timeout": 0, 00:14:11.406 "avg_latency_us": 98.96852085437669, 00:14:11.406 "min_latency_us": 26.717903930131005, 00:14:11.406 "max_latency_us": 1745.7187772925763 00:14:11.406 } 00:14:11.406 ], 00:14:11.406 "core_count": 1 00:14:11.406 } 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71458 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71458 ']' 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71458 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71458 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71458' 00:14:11.406 killing process with pid 71458 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71458 00:14:11.406 [2024-11-20 07:10:53.561139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.406 07:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71458 00:14:11.972 [2024-11-20 07:10:53.934115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lYWfNQ0Qfh 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:13.355 ************************************ 00:14:13.355 END TEST raid_write_error_test 00:14:13.355 
************************************ 00:14:13.355 00:14:13.355 real 0m4.956s 00:14:13.355 user 0m5.853s 00:14:13.355 sys 0m0.638s 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.355 07:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.355 07:10:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:13.355 07:10:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:13.355 07:10:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:13.355 07:10:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.355 07:10:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.355 ************************************ 00:14:13.355 START TEST raid_state_function_test 00:14:13.355 ************************************ 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.355 07:10:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:13.355 07:10:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71603 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71603' 00:14:13.355 Process raid pid: 71603 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71603 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71603 ']' 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.355 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.355 [2024-11-20 07:10:55.380606] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:13.355 [2024-11-20 07:10:55.380840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.355 [2024-11-20 07:10:55.560026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.622 [2024-11-20 07:10:55.687045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.879 [2024-11-20 07:10:55.918084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.879 [2024-11-20 07:10:55.918190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.138 [2024-11-20 07:10:56.257304] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.138 [2024-11-20 07:10:56.257458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.138 [2024-11-20 07:10:56.257490] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.138 [2024-11-20 07:10:56.257515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.138 [2024-11-20 07:10:56.257534] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:14.138 [2024-11-20 07:10:56.257555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.138 [2024-11-20 07:10:56.257573] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:14.138 [2024-11-20 07:10:56.257610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.138 "name": "Existed_Raid", 00:14:14.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.138 "strip_size_kb": 64, 00:14:14.138 "state": "configuring", 00:14:14.138 "raid_level": "concat", 00:14:14.138 "superblock": false, 00:14:14.138 "num_base_bdevs": 4, 00:14:14.138 "num_base_bdevs_discovered": 0, 00:14:14.138 "num_base_bdevs_operational": 4, 00:14:14.138 "base_bdevs_list": [ 00:14:14.138 { 00:14:14.138 "name": "BaseBdev1", 00:14:14.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.138 "is_configured": false, 00:14:14.138 "data_offset": 0, 00:14:14.138 "data_size": 0 00:14:14.138 }, 00:14:14.138 { 00:14:14.138 "name": "BaseBdev2", 00:14:14.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.138 "is_configured": false, 00:14:14.138 "data_offset": 0, 00:14:14.138 "data_size": 0 00:14:14.138 }, 00:14:14.138 { 00:14:14.138 "name": "BaseBdev3", 00:14:14.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.138 "is_configured": false, 00:14:14.138 "data_offset": 0, 00:14:14.138 "data_size": 0 00:14:14.138 }, 00:14:14.138 { 00:14:14.138 "name": "BaseBdev4", 00:14:14.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.138 "is_configured": false, 00:14:14.138 "data_offset": 0, 00:14:14.138 "data_size": 0 00:14:14.138 } 00:14:14.138 ] 00:14:14.138 }' 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.138 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.706 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:14.706 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.706 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.706 [2024-11-20 07:10:56.732445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.706 [2024-11-20 07:10:56.732544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:14.706 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.706 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.706 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.706 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.706 [2024-11-20 07:10:56.740411] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.707 [2024-11-20 07:10:56.740489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.707 [2024-11-20 07:10:56.740540] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.707 [2024-11-20 07:10:56.740574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.707 [2024-11-20 07:10:56.740604] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:14.707 [2024-11-20 07:10:56.740653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.707 [2024-11-20 07:10:56.740685] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:14.707 [2024-11-20 07:10:56.740718] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.707 [2024-11-20 07:10:56.790705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.707 BaseBdev1 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.707 [ 00:14:14.707 { 00:14:14.707 "name": "BaseBdev1", 00:14:14.707 "aliases": [ 00:14:14.707 "52709560-d42c-49b1-828d-02dd8921b7b9" 00:14:14.707 ], 00:14:14.707 "product_name": "Malloc disk", 00:14:14.707 "block_size": 512, 00:14:14.707 "num_blocks": 65536, 00:14:14.707 "uuid": "52709560-d42c-49b1-828d-02dd8921b7b9", 00:14:14.707 "assigned_rate_limits": { 00:14:14.707 "rw_ios_per_sec": 0, 00:14:14.707 "rw_mbytes_per_sec": 0, 00:14:14.707 "r_mbytes_per_sec": 0, 00:14:14.707 "w_mbytes_per_sec": 0 00:14:14.707 }, 00:14:14.707 "claimed": true, 00:14:14.707 "claim_type": "exclusive_write", 00:14:14.707 "zoned": false, 00:14:14.707 "supported_io_types": { 00:14:14.707 "read": true, 00:14:14.707 "write": true, 00:14:14.707 "unmap": true, 00:14:14.707 "flush": true, 00:14:14.707 "reset": true, 00:14:14.707 "nvme_admin": false, 00:14:14.707 "nvme_io": false, 00:14:14.707 "nvme_io_md": false, 00:14:14.707 "write_zeroes": true, 00:14:14.707 "zcopy": true, 00:14:14.707 "get_zone_info": false, 00:14:14.707 "zone_management": false, 00:14:14.707 "zone_append": false, 00:14:14.707 "compare": false, 00:14:14.707 "compare_and_write": false, 00:14:14.707 "abort": true, 00:14:14.707 "seek_hole": false, 00:14:14.707 "seek_data": false, 00:14:14.707 "copy": true, 00:14:14.707 "nvme_iov_md": false 00:14:14.707 }, 00:14:14.707 "memory_domains": [ 00:14:14.707 { 00:14:14.707 "dma_device_id": "system", 00:14:14.707 "dma_device_type": 1 00:14:14.707 }, 00:14:14.707 { 00:14:14.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.707 "dma_device_type": 2 00:14:14.707 } 00:14:14.707 ], 00:14:14.707 "driver_specific": {} 00:14:14.707 } 00:14:14.707 ] 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.707 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.707 "name": "Existed_Raid", 
00:14:14.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.707 "strip_size_kb": 64, 00:14:14.707 "state": "configuring", 00:14:14.707 "raid_level": "concat", 00:14:14.707 "superblock": false, 00:14:14.707 "num_base_bdevs": 4, 00:14:14.707 "num_base_bdevs_discovered": 1, 00:14:14.707 "num_base_bdevs_operational": 4, 00:14:14.707 "base_bdevs_list": [ 00:14:14.707 { 00:14:14.707 "name": "BaseBdev1", 00:14:14.707 "uuid": "52709560-d42c-49b1-828d-02dd8921b7b9", 00:14:14.707 "is_configured": true, 00:14:14.707 "data_offset": 0, 00:14:14.707 "data_size": 65536 00:14:14.708 }, 00:14:14.708 { 00:14:14.708 "name": "BaseBdev2", 00:14:14.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.708 "is_configured": false, 00:14:14.708 "data_offset": 0, 00:14:14.708 "data_size": 0 00:14:14.708 }, 00:14:14.708 { 00:14:14.708 "name": "BaseBdev3", 00:14:14.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.708 "is_configured": false, 00:14:14.708 "data_offset": 0, 00:14:14.708 "data_size": 0 00:14:14.708 }, 00:14:14.708 { 00:14:14.708 "name": "BaseBdev4", 00:14:14.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.708 "is_configured": false, 00:14:14.708 "data_offset": 0, 00:14:14.708 "data_size": 0 00:14:14.708 } 00:14:14.708 ] 00:14:14.708 }' 00:14:14.708 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.708 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.275 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.275 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.275 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.275 [2024-11-20 07:10:57.246012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.275 [2024-11-20 07:10:57.246077] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:15.275 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.275 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:15.275 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.275 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.275 [2024-11-20 07:10:57.254054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.275 [2024-11-20 07:10:57.256163] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.275 [2024-11-20 07:10:57.256254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.275 [2024-11-20 07:10:57.256269] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:15.275 [2024-11-20 07:10:57.256282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.276 [2024-11-20 07:10:57.256290] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:15.276 [2024-11-20 07:10:57.256299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.276 "name": "Existed_Raid", 00:14:15.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.276 "strip_size_kb": 64, 00:14:15.276 "state": "configuring", 00:14:15.276 "raid_level": "concat", 00:14:15.276 "superblock": false, 00:14:15.276 "num_base_bdevs": 4, 00:14:15.276 
"num_base_bdevs_discovered": 1, 00:14:15.276 "num_base_bdevs_operational": 4, 00:14:15.276 "base_bdevs_list": [ 00:14:15.276 { 00:14:15.276 "name": "BaseBdev1", 00:14:15.276 "uuid": "52709560-d42c-49b1-828d-02dd8921b7b9", 00:14:15.276 "is_configured": true, 00:14:15.276 "data_offset": 0, 00:14:15.276 "data_size": 65536 00:14:15.276 }, 00:14:15.276 { 00:14:15.276 "name": "BaseBdev2", 00:14:15.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.276 "is_configured": false, 00:14:15.276 "data_offset": 0, 00:14:15.276 "data_size": 0 00:14:15.276 }, 00:14:15.276 { 00:14:15.276 "name": "BaseBdev3", 00:14:15.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.276 "is_configured": false, 00:14:15.276 "data_offset": 0, 00:14:15.276 "data_size": 0 00:14:15.276 }, 00:14:15.276 { 00:14:15.276 "name": "BaseBdev4", 00:14:15.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.276 "is_configured": false, 00:14:15.276 "data_offset": 0, 00:14:15.276 "data_size": 0 00:14:15.276 } 00:14:15.276 ] 00:14:15.276 }' 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.276 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.535 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.535 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.535 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.794 [2024-11-20 07:10:57.813013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.794 BaseBdev2 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:15.794 07:10:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.794 [ 00:14:15.794 { 00:14:15.794 "name": "BaseBdev2", 00:14:15.794 "aliases": [ 00:14:15.794 "eabfc09b-31d2-4eb6-84b8-55b210135fbb" 00:14:15.794 ], 00:14:15.794 "product_name": "Malloc disk", 00:14:15.794 "block_size": 512, 00:14:15.794 "num_blocks": 65536, 00:14:15.794 "uuid": "eabfc09b-31d2-4eb6-84b8-55b210135fbb", 00:14:15.794 "assigned_rate_limits": { 00:14:15.794 "rw_ios_per_sec": 0, 00:14:15.794 "rw_mbytes_per_sec": 0, 00:14:15.794 "r_mbytes_per_sec": 0, 00:14:15.794 "w_mbytes_per_sec": 0 00:14:15.794 }, 00:14:15.794 "claimed": true, 00:14:15.794 "claim_type": "exclusive_write", 00:14:15.794 "zoned": false, 00:14:15.794 "supported_io_types": { 
00:14:15.794 "read": true, 00:14:15.794 "write": true, 00:14:15.794 "unmap": true, 00:14:15.794 "flush": true, 00:14:15.794 "reset": true, 00:14:15.794 "nvme_admin": false, 00:14:15.794 "nvme_io": false, 00:14:15.794 "nvme_io_md": false, 00:14:15.794 "write_zeroes": true, 00:14:15.794 "zcopy": true, 00:14:15.794 "get_zone_info": false, 00:14:15.794 "zone_management": false, 00:14:15.794 "zone_append": false, 00:14:15.794 "compare": false, 00:14:15.794 "compare_and_write": false, 00:14:15.794 "abort": true, 00:14:15.794 "seek_hole": false, 00:14:15.794 "seek_data": false, 00:14:15.794 "copy": true, 00:14:15.794 "nvme_iov_md": false 00:14:15.794 }, 00:14:15.794 "memory_domains": [ 00:14:15.794 { 00:14:15.794 "dma_device_id": "system", 00:14:15.794 "dma_device_type": 1 00:14:15.794 }, 00:14:15.794 { 00:14:15.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.794 "dma_device_type": 2 00:14:15.794 } 00:14:15.794 ], 00:14:15.794 "driver_specific": {} 00:14:15.794 } 00:14:15.794 ] 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.794 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.794 "name": "Existed_Raid", 00:14:15.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.794 "strip_size_kb": 64, 00:14:15.794 "state": "configuring", 00:14:15.794 "raid_level": "concat", 00:14:15.794 "superblock": false, 00:14:15.794 "num_base_bdevs": 4, 00:14:15.794 "num_base_bdevs_discovered": 2, 00:14:15.794 "num_base_bdevs_operational": 4, 00:14:15.794 "base_bdevs_list": [ 00:14:15.794 { 00:14:15.794 "name": "BaseBdev1", 00:14:15.794 "uuid": "52709560-d42c-49b1-828d-02dd8921b7b9", 00:14:15.794 "is_configured": true, 00:14:15.794 "data_offset": 0, 00:14:15.794 "data_size": 65536 00:14:15.794 }, 00:14:15.794 { 00:14:15.794 "name": "BaseBdev2", 00:14:15.794 "uuid": "eabfc09b-31d2-4eb6-84b8-55b210135fbb", 00:14:15.794 
"is_configured": true, 00:14:15.795 "data_offset": 0, 00:14:15.795 "data_size": 65536 00:14:15.795 }, 00:14:15.795 { 00:14:15.795 "name": "BaseBdev3", 00:14:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.795 "is_configured": false, 00:14:15.795 "data_offset": 0, 00:14:15.795 "data_size": 0 00:14:15.795 }, 00:14:15.795 { 00:14:15.795 "name": "BaseBdev4", 00:14:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.795 "is_configured": false, 00:14:15.795 "data_offset": 0, 00:14:15.795 "data_size": 0 00:14:15.795 } 00:14:15.795 ] 00:14:15.795 }' 00:14:15.795 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.795 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.054 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:16.054 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.054 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.313 [2024-11-20 07:10:58.359099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.313 BaseBdev3 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.313 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.313 [ 00:14:16.313 { 00:14:16.313 "name": "BaseBdev3", 00:14:16.313 "aliases": [ 00:14:16.313 "4becd994-c0d3-42f2-b072-06d86a1e09f2" 00:14:16.313 ], 00:14:16.313 "product_name": "Malloc disk", 00:14:16.313 "block_size": 512, 00:14:16.313 "num_blocks": 65536, 00:14:16.313 "uuid": "4becd994-c0d3-42f2-b072-06d86a1e09f2", 00:14:16.313 "assigned_rate_limits": { 00:14:16.313 "rw_ios_per_sec": 0, 00:14:16.313 "rw_mbytes_per_sec": 0, 00:14:16.313 "r_mbytes_per_sec": 0, 00:14:16.313 "w_mbytes_per_sec": 0 00:14:16.313 }, 00:14:16.313 "claimed": true, 00:14:16.313 "claim_type": "exclusive_write", 00:14:16.313 "zoned": false, 00:14:16.313 "supported_io_types": { 00:14:16.313 "read": true, 00:14:16.313 "write": true, 00:14:16.313 "unmap": true, 00:14:16.313 "flush": true, 00:14:16.313 "reset": true, 00:14:16.313 "nvme_admin": false, 00:14:16.313 "nvme_io": false, 00:14:16.313 "nvme_io_md": false, 00:14:16.313 "write_zeroes": true, 00:14:16.313 "zcopy": true, 00:14:16.313 "get_zone_info": false, 00:14:16.313 "zone_management": false, 00:14:16.313 "zone_append": false, 00:14:16.313 "compare": false, 00:14:16.313 "compare_and_write": false, 
00:14:16.313 "abort": true, 00:14:16.313 "seek_hole": false, 00:14:16.313 "seek_data": false, 00:14:16.314 "copy": true, 00:14:16.314 "nvme_iov_md": false 00:14:16.314 }, 00:14:16.314 "memory_domains": [ 00:14:16.314 { 00:14:16.314 "dma_device_id": "system", 00:14:16.314 "dma_device_type": 1 00:14:16.314 }, 00:14:16.314 { 00:14:16.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.314 "dma_device_type": 2 00:14:16.314 } 00:14:16.314 ], 00:14:16.314 "driver_specific": {} 00:14:16.314 } 00:14:16.314 ] 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.314 "name": "Existed_Raid", 00:14:16.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.314 "strip_size_kb": 64, 00:14:16.314 "state": "configuring", 00:14:16.314 "raid_level": "concat", 00:14:16.314 "superblock": false, 00:14:16.314 "num_base_bdevs": 4, 00:14:16.314 "num_base_bdevs_discovered": 3, 00:14:16.314 "num_base_bdevs_operational": 4, 00:14:16.314 "base_bdevs_list": [ 00:14:16.314 { 00:14:16.314 "name": "BaseBdev1", 00:14:16.314 "uuid": "52709560-d42c-49b1-828d-02dd8921b7b9", 00:14:16.314 "is_configured": true, 00:14:16.314 "data_offset": 0, 00:14:16.314 "data_size": 65536 00:14:16.314 }, 00:14:16.314 { 00:14:16.314 "name": "BaseBdev2", 00:14:16.314 "uuid": "eabfc09b-31d2-4eb6-84b8-55b210135fbb", 00:14:16.314 "is_configured": true, 00:14:16.314 "data_offset": 0, 00:14:16.314 "data_size": 65536 00:14:16.314 }, 00:14:16.314 { 00:14:16.314 "name": "BaseBdev3", 00:14:16.314 "uuid": "4becd994-c0d3-42f2-b072-06d86a1e09f2", 00:14:16.314 "is_configured": true, 00:14:16.314 "data_offset": 0, 00:14:16.314 "data_size": 65536 00:14:16.314 }, 00:14:16.314 { 00:14:16.314 "name": "BaseBdev4", 00:14:16.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.314 "is_configured": false, 
00:14:16.314 "data_offset": 0, 00:14:16.314 "data_size": 0 00:14:16.314 } 00:14:16.314 ] 00:14:16.314 }' 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.314 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.572 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:16.573 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.573 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.832 [2024-11-20 07:10:58.847561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.832 [2024-11-20 07:10:58.847697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:16.832 [2024-11-20 07:10:58.847723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:16.832 [2024-11-20 07:10:58.848063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:16.832 [2024-11-20 07:10:58.848288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:16.832 [2024-11-20 07:10:58.848348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:16.832 [2024-11-20 07:10:58.848704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.832 BaseBdev4 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.832 [ 00:14:16.832 { 00:14:16.832 "name": "BaseBdev4", 00:14:16.832 "aliases": [ 00:14:16.832 "b04ac765-f585-400d-8d45-6e8430f8b83d" 00:14:16.832 ], 00:14:16.832 "product_name": "Malloc disk", 00:14:16.832 "block_size": 512, 00:14:16.832 "num_blocks": 65536, 00:14:16.832 "uuid": "b04ac765-f585-400d-8d45-6e8430f8b83d", 00:14:16.832 "assigned_rate_limits": { 00:14:16.832 "rw_ios_per_sec": 0, 00:14:16.832 "rw_mbytes_per_sec": 0, 00:14:16.832 "r_mbytes_per_sec": 0, 00:14:16.832 "w_mbytes_per_sec": 0 00:14:16.832 }, 00:14:16.832 "claimed": true, 00:14:16.832 "claim_type": "exclusive_write", 00:14:16.832 "zoned": false, 00:14:16.832 "supported_io_types": { 00:14:16.832 "read": true, 00:14:16.832 "write": true, 00:14:16.832 "unmap": true, 00:14:16.832 "flush": true, 00:14:16.832 "reset": true, 00:14:16.832 
"nvme_admin": false, 00:14:16.832 "nvme_io": false, 00:14:16.832 "nvme_io_md": false, 00:14:16.832 "write_zeroes": true, 00:14:16.832 "zcopy": true, 00:14:16.832 "get_zone_info": false, 00:14:16.832 "zone_management": false, 00:14:16.832 "zone_append": false, 00:14:16.832 "compare": false, 00:14:16.832 "compare_and_write": false, 00:14:16.832 "abort": true, 00:14:16.832 "seek_hole": false, 00:14:16.832 "seek_data": false, 00:14:16.832 "copy": true, 00:14:16.832 "nvme_iov_md": false 00:14:16.832 }, 00:14:16.832 "memory_domains": [ 00:14:16.832 { 00:14:16.832 "dma_device_id": "system", 00:14:16.832 "dma_device_type": 1 00:14:16.832 }, 00:14:16.832 { 00:14:16.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.832 "dma_device_type": 2 00:14:16.832 } 00:14:16.832 ], 00:14:16.832 "driver_specific": {} 00:14:16.832 } 00:14:16.832 ] 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.832 
07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.832 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.832 "name": "Existed_Raid", 00:14:16.832 "uuid": "227de939-384d-4746-a987-24547051464f", 00:14:16.832 "strip_size_kb": 64, 00:14:16.832 "state": "online", 00:14:16.832 "raid_level": "concat", 00:14:16.832 "superblock": false, 00:14:16.832 "num_base_bdevs": 4, 00:14:16.832 "num_base_bdevs_discovered": 4, 00:14:16.832 "num_base_bdevs_operational": 4, 00:14:16.832 "base_bdevs_list": [ 00:14:16.832 { 00:14:16.832 "name": "BaseBdev1", 00:14:16.832 "uuid": "52709560-d42c-49b1-828d-02dd8921b7b9", 00:14:16.832 "is_configured": true, 00:14:16.832 "data_offset": 0, 00:14:16.832 "data_size": 65536 00:14:16.832 }, 00:14:16.832 { 00:14:16.832 "name": "BaseBdev2", 00:14:16.832 "uuid": "eabfc09b-31d2-4eb6-84b8-55b210135fbb", 00:14:16.832 "is_configured": true, 00:14:16.832 "data_offset": 0, 00:14:16.832 "data_size": 65536 00:14:16.832 }, 00:14:16.832 { 00:14:16.832 "name": "BaseBdev3", 
00:14:16.832 "uuid": "4becd994-c0d3-42f2-b072-06d86a1e09f2", 00:14:16.832 "is_configured": true, 00:14:16.833 "data_offset": 0, 00:14:16.833 "data_size": 65536 00:14:16.833 }, 00:14:16.833 { 00:14:16.833 "name": "BaseBdev4", 00:14:16.833 "uuid": "b04ac765-f585-400d-8d45-6e8430f8b83d", 00:14:16.833 "is_configured": true, 00:14:16.833 "data_offset": 0, 00:14:16.833 "data_size": 65536 00:14:16.833 } 00:14:16.833 ] 00:14:16.833 }' 00:14:16.833 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.833 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.092 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:17.093 [2024-11-20 07:10:59.343239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.352 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.352 
07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:17.352 "name": "Existed_Raid", 00:14:17.352 "aliases": [ 00:14:17.352 "227de939-384d-4746-a987-24547051464f" 00:14:17.352 ], 00:14:17.352 "product_name": "Raid Volume", 00:14:17.352 "block_size": 512, 00:14:17.352 "num_blocks": 262144, 00:14:17.352 "uuid": "227de939-384d-4746-a987-24547051464f", 00:14:17.352 "assigned_rate_limits": { 00:14:17.352 "rw_ios_per_sec": 0, 00:14:17.352 "rw_mbytes_per_sec": 0, 00:14:17.352 "r_mbytes_per_sec": 0, 00:14:17.352 "w_mbytes_per_sec": 0 00:14:17.352 }, 00:14:17.352 "claimed": false, 00:14:17.352 "zoned": false, 00:14:17.352 "supported_io_types": { 00:14:17.352 "read": true, 00:14:17.352 "write": true, 00:14:17.352 "unmap": true, 00:14:17.352 "flush": true, 00:14:17.352 "reset": true, 00:14:17.352 "nvme_admin": false, 00:14:17.352 "nvme_io": false, 00:14:17.352 "nvme_io_md": false, 00:14:17.352 "write_zeroes": true, 00:14:17.352 "zcopy": false, 00:14:17.352 "get_zone_info": false, 00:14:17.352 "zone_management": false, 00:14:17.352 "zone_append": false, 00:14:17.352 "compare": false, 00:14:17.352 "compare_and_write": false, 00:14:17.352 "abort": false, 00:14:17.352 "seek_hole": false, 00:14:17.352 "seek_data": false, 00:14:17.352 "copy": false, 00:14:17.352 "nvme_iov_md": false 00:14:17.352 }, 00:14:17.352 "memory_domains": [ 00:14:17.352 { 00:14:17.352 "dma_device_id": "system", 00:14:17.352 "dma_device_type": 1 00:14:17.352 }, 00:14:17.352 { 00:14:17.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.352 "dma_device_type": 2 00:14:17.352 }, 00:14:17.352 { 00:14:17.352 "dma_device_id": "system", 00:14:17.352 "dma_device_type": 1 00:14:17.352 }, 00:14:17.352 { 00:14:17.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.352 "dma_device_type": 2 00:14:17.352 }, 00:14:17.352 { 00:14:17.352 "dma_device_id": "system", 00:14:17.352 "dma_device_type": 1 00:14:17.352 }, 00:14:17.352 { 00:14:17.352 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:17.352 "dma_device_type": 2 00:14:17.352 }, 00:14:17.352 { 00:14:17.352 "dma_device_id": "system", 00:14:17.352 "dma_device_type": 1 00:14:17.352 }, 00:14:17.352 { 00:14:17.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.352 "dma_device_type": 2 00:14:17.352 } 00:14:17.352 ], 00:14:17.352 "driver_specific": { 00:14:17.352 "raid": { 00:14:17.352 "uuid": "227de939-384d-4746-a987-24547051464f", 00:14:17.352 "strip_size_kb": 64, 00:14:17.352 "state": "online", 00:14:17.352 "raid_level": "concat", 00:14:17.352 "superblock": false, 00:14:17.352 "num_base_bdevs": 4, 00:14:17.353 "num_base_bdevs_discovered": 4, 00:14:17.353 "num_base_bdevs_operational": 4, 00:14:17.353 "base_bdevs_list": [ 00:14:17.353 { 00:14:17.353 "name": "BaseBdev1", 00:14:17.353 "uuid": "52709560-d42c-49b1-828d-02dd8921b7b9", 00:14:17.353 "is_configured": true, 00:14:17.353 "data_offset": 0, 00:14:17.353 "data_size": 65536 00:14:17.353 }, 00:14:17.353 { 00:14:17.353 "name": "BaseBdev2", 00:14:17.353 "uuid": "eabfc09b-31d2-4eb6-84b8-55b210135fbb", 00:14:17.353 "is_configured": true, 00:14:17.353 "data_offset": 0, 00:14:17.353 "data_size": 65536 00:14:17.353 }, 00:14:17.353 { 00:14:17.353 "name": "BaseBdev3", 00:14:17.353 "uuid": "4becd994-c0d3-42f2-b072-06d86a1e09f2", 00:14:17.353 "is_configured": true, 00:14:17.353 "data_offset": 0, 00:14:17.353 "data_size": 65536 00:14:17.353 }, 00:14:17.353 { 00:14:17.353 "name": "BaseBdev4", 00:14:17.353 "uuid": "b04ac765-f585-400d-8d45-6e8430f8b83d", 00:14:17.353 "is_configured": true, 00:14:17.353 "data_offset": 0, 00:14:17.353 "data_size": 65536 00:14:17.353 } 00:14:17.353 ] 00:14:17.353 } 00:14:17.353 } 00:14:17.353 }' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:17.353 BaseBdev2 
00:14:17.353 BaseBdev3 00:14:17.353 BaseBdev4' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.353 07:10:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.353 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.613 07:10:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.613 [2024-11-20 07:10:59.678398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.613 [2024-11-20 07:10:59.678478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.613 [2024-11-20 07:10:59.678557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.613 "name": "Existed_Raid", 00:14:17.613 "uuid": "227de939-384d-4746-a987-24547051464f", 00:14:17.613 "strip_size_kb": 64, 00:14:17.613 "state": "offline", 00:14:17.613 "raid_level": "concat", 00:14:17.613 "superblock": false, 00:14:17.613 "num_base_bdevs": 4, 00:14:17.613 "num_base_bdevs_discovered": 3, 00:14:17.613 "num_base_bdevs_operational": 3, 00:14:17.613 "base_bdevs_list": [ 00:14:17.613 { 00:14:17.613 "name": null, 00:14:17.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.613 "is_configured": false, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 }, 00:14:17.613 { 00:14:17.613 "name": "BaseBdev2", 00:14:17.613 "uuid": "eabfc09b-31d2-4eb6-84b8-55b210135fbb", 00:14:17.613 "is_configured": 
true, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 }, 00:14:17.613 { 00:14:17.613 "name": "BaseBdev3", 00:14:17.613 "uuid": "4becd994-c0d3-42f2-b072-06d86a1e09f2", 00:14:17.613 "is_configured": true, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 }, 00:14:17.613 { 00:14:17.613 "name": "BaseBdev4", 00:14:17.613 "uuid": "b04ac765-f585-400d-8d45-6e8430f8b83d", 00:14:17.613 "is_configured": true, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 } 00:14:17.613 ] 00:14:17.613 }' 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.613 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.182 [2024-11-20 07:11:00.299295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.182 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.444 [2024-11-20 07:11:00.470949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.444 07:11:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.444 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.444 [2024-11-20 07:11:00.635801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:18.444 [2024-11-20 07:11:00.635856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.703 BaseBdev2 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.703 [ 00:14:18.703 { 00:14:18.703 "name": "BaseBdev2", 00:14:18.703 "aliases": [ 00:14:18.703 "4e14735d-af55-44f6-a3be-4ad0b0b73da9" 00:14:18.703 ], 00:14:18.703 "product_name": "Malloc disk", 00:14:18.703 "block_size": 512, 00:14:18.703 "num_blocks": 65536, 00:14:18.703 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:18.703 "assigned_rate_limits": { 00:14:18.703 "rw_ios_per_sec": 0, 00:14:18.703 "rw_mbytes_per_sec": 0, 00:14:18.703 "r_mbytes_per_sec": 0, 00:14:18.703 "w_mbytes_per_sec": 0 00:14:18.703 }, 00:14:18.703 "claimed": false, 00:14:18.703 "zoned": false, 00:14:18.703 "supported_io_types": { 00:14:18.703 "read": true, 00:14:18.703 "write": true, 00:14:18.703 "unmap": true, 00:14:18.703 "flush": true, 00:14:18.703 "reset": true, 00:14:18.703 "nvme_admin": false, 00:14:18.703 "nvme_io": false, 00:14:18.703 "nvme_io_md": false, 00:14:18.703 "write_zeroes": true, 00:14:18.703 "zcopy": true, 00:14:18.703 "get_zone_info": false, 00:14:18.703 "zone_management": false, 00:14:18.703 "zone_append": false, 00:14:18.703 "compare": false, 00:14:18.703 "compare_and_write": false, 00:14:18.703 "abort": true, 00:14:18.703 "seek_hole": false, 00:14:18.703 
"seek_data": false, 00:14:18.703 "copy": true, 00:14:18.703 "nvme_iov_md": false 00:14:18.703 }, 00:14:18.703 "memory_domains": [ 00:14:18.703 { 00:14:18.703 "dma_device_id": "system", 00:14:18.703 "dma_device_type": 1 00:14:18.703 }, 00:14:18.703 { 00:14:18.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.703 "dma_device_type": 2 00:14:18.703 } 00:14:18.703 ], 00:14:18.703 "driver_specific": {} 00:14:18.703 } 00:14:18.703 ] 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.703 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.704 BaseBdev3 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.704 [ 00:14:18.704 { 00:14:18.704 "name": "BaseBdev3", 00:14:18.704 "aliases": [ 00:14:18.704 "34f5ef05-307d-4fe9-8628-2e376125fa20" 00:14:18.704 ], 00:14:18.704 "product_name": "Malloc disk", 00:14:18.704 "block_size": 512, 00:14:18.704 "num_blocks": 65536, 00:14:18.704 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:18.704 "assigned_rate_limits": { 00:14:18.704 "rw_ios_per_sec": 0, 00:14:18.704 "rw_mbytes_per_sec": 0, 00:14:18.704 "r_mbytes_per_sec": 0, 00:14:18.704 "w_mbytes_per_sec": 0 00:14:18.704 }, 00:14:18.704 "claimed": false, 00:14:18.704 "zoned": false, 00:14:18.704 "supported_io_types": { 00:14:18.704 "read": true, 00:14:18.704 "write": true, 00:14:18.704 "unmap": true, 00:14:18.704 "flush": true, 00:14:18.704 "reset": true, 00:14:18.704 "nvme_admin": false, 00:14:18.704 "nvme_io": false, 00:14:18.704 "nvme_io_md": false, 00:14:18.704 "write_zeroes": true, 00:14:18.704 "zcopy": true, 00:14:18.704 "get_zone_info": false, 00:14:18.704 "zone_management": false, 00:14:18.704 "zone_append": false, 00:14:18.704 "compare": false, 00:14:18.704 "compare_and_write": false, 00:14:18.704 "abort": true, 00:14:18.704 "seek_hole": false, 00:14:18.704 "seek_data": false, 
00:14:18.704 "copy": true, 00:14:18.704 "nvme_iov_md": false 00:14:18.704 }, 00:14:18.704 "memory_domains": [ 00:14:18.704 { 00:14:18.704 "dma_device_id": "system", 00:14:18.704 "dma_device_type": 1 00:14:18.704 }, 00:14:18.704 { 00:14:18.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.704 "dma_device_type": 2 00:14:18.704 } 00:14:18.704 ], 00:14:18.704 "driver_specific": {} 00:14:18.704 } 00:14:18.704 ] 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.704 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.963 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.963 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:18.963 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.963 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.963 BaseBdev4 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.963 
07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.963 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.963 [ 00:14:18.963 { 00:14:18.963 "name": "BaseBdev4", 00:14:18.963 "aliases": [ 00:14:18.963 "dbb609ae-5692-4fcd-8ecc-97c8a22a370f" 00:14:18.963 ], 00:14:18.963 "product_name": "Malloc disk", 00:14:18.963 "block_size": 512, 00:14:18.963 "num_blocks": 65536, 00:14:18.964 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:18.964 "assigned_rate_limits": { 00:14:18.964 "rw_ios_per_sec": 0, 00:14:18.964 "rw_mbytes_per_sec": 0, 00:14:18.964 "r_mbytes_per_sec": 0, 00:14:18.964 "w_mbytes_per_sec": 0 00:14:18.964 }, 00:14:18.964 "claimed": false, 00:14:18.964 "zoned": false, 00:14:18.964 "supported_io_types": { 00:14:18.964 "read": true, 00:14:18.964 "write": true, 00:14:18.964 "unmap": true, 00:14:18.964 "flush": true, 00:14:18.964 "reset": true, 00:14:18.964 "nvme_admin": false, 00:14:18.964 "nvme_io": false, 00:14:18.964 "nvme_io_md": false, 00:14:18.964 "write_zeroes": true, 00:14:18.964 "zcopy": true, 00:14:18.964 "get_zone_info": false, 00:14:18.964 "zone_management": false, 00:14:18.964 "zone_append": false, 00:14:18.964 "compare": false, 00:14:18.964 "compare_and_write": false, 00:14:18.964 "abort": true, 00:14:18.964 "seek_hole": false, 00:14:18.964 "seek_data": false, 00:14:18.964 
"copy": true, 00:14:18.964 "nvme_iov_md": false 00:14:18.964 }, 00:14:18.964 "memory_domains": [ 00:14:18.964 { 00:14:18.964 "dma_device_id": "system", 00:14:18.964 "dma_device_type": 1 00:14:18.964 }, 00:14:18.964 { 00:14:18.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.964 "dma_device_type": 2 00:14:18.964 } 00:14:18.964 ], 00:14:18.964 "driver_specific": {} 00:14:18.964 } 00:14:18.964 ] 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.964 [2024-11-20 07:11:01.060713] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.964 [2024-11-20 07:11:01.060761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.964 [2024-11-20 07:11:01.060785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.964 [2024-11-20 07:11:01.062806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.964 [2024-11-20 07:11:01.062917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.964 07:11:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.964 "name": "Existed_Raid", 00:14:18.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.964 "strip_size_kb": 64, 00:14:18.964 "state": "configuring", 00:14:18.964 
"raid_level": "concat", 00:14:18.964 "superblock": false, 00:14:18.964 "num_base_bdevs": 4, 00:14:18.964 "num_base_bdevs_discovered": 3, 00:14:18.964 "num_base_bdevs_operational": 4, 00:14:18.964 "base_bdevs_list": [ 00:14:18.964 { 00:14:18.964 "name": "BaseBdev1", 00:14:18.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.964 "is_configured": false, 00:14:18.964 "data_offset": 0, 00:14:18.964 "data_size": 0 00:14:18.964 }, 00:14:18.964 { 00:14:18.964 "name": "BaseBdev2", 00:14:18.964 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:18.964 "is_configured": true, 00:14:18.964 "data_offset": 0, 00:14:18.964 "data_size": 65536 00:14:18.964 }, 00:14:18.964 { 00:14:18.964 "name": "BaseBdev3", 00:14:18.964 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:18.964 "is_configured": true, 00:14:18.964 "data_offset": 0, 00:14:18.964 "data_size": 65536 00:14:18.964 }, 00:14:18.964 { 00:14:18.964 "name": "BaseBdev4", 00:14:18.964 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:18.964 "is_configured": true, 00:14:18.964 "data_offset": 0, 00:14:18.964 "data_size": 65536 00:14:18.964 } 00:14:18.964 ] 00:14:18.964 }' 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.964 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.224 [2024-11-20 07:11:01.472054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.224 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.481 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.481 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.481 "name": "Existed_Raid", 00:14:19.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.481 "strip_size_kb": 64, 00:14:19.481 "state": "configuring", 00:14:19.481 "raid_level": "concat", 00:14:19.481 "superblock": false, 
00:14:19.481 "num_base_bdevs": 4, 00:14:19.481 "num_base_bdevs_discovered": 2, 00:14:19.481 "num_base_bdevs_operational": 4, 00:14:19.481 "base_bdevs_list": [ 00:14:19.481 { 00:14:19.481 "name": "BaseBdev1", 00:14:19.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.481 "is_configured": false, 00:14:19.481 "data_offset": 0, 00:14:19.481 "data_size": 0 00:14:19.481 }, 00:14:19.481 { 00:14:19.481 "name": null, 00:14:19.481 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:19.481 "is_configured": false, 00:14:19.481 "data_offset": 0, 00:14:19.481 "data_size": 65536 00:14:19.481 }, 00:14:19.482 { 00:14:19.482 "name": "BaseBdev3", 00:14:19.482 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:19.482 "is_configured": true, 00:14:19.482 "data_offset": 0, 00:14:19.482 "data_size": 65536 00:14:19.482 }, 00:14:19.482 { 00:14:19.482 "name": "BaseBdev4", 00:14:19.482 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:19.482 "is_configured": true, 00:14:19.482 "data_offset": 0, 00:14:19.482 "data_size": 65536 00:14:19.482 } 00:14:19.482 ] 00:14:19.482 }' 00:14:19.482 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.482 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:19.777 07:11:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.777 [2024-11-20 07:11:01.978262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.777 BaseBdev1 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.777 07:11:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.777 [ 00:14:19.777 { 00:14:19.777 "name": "BaseBdev1", 00:14:19.777 "aliases": [ 00:14:19.777 "c7185d5b-7e01-42d0-8fd8-53d695b53937" 00:14:19.777 ], 00:14:19.777 "product_name": "Malloc disk", 00:14:19.777 "block_size": 512, 00:14:19.777 "num_blocks": 65536, 00:14:19.777 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:19.777 "assigned_rate_limits": { 00:14:19.777 "rw_ios_per_sec": 0, 00:14:19.777 "rw_mbytes_per_sec": 0, 00:14:19.777 "r_mbytes_per_sec": 0, 00:14:19.777 "w_mbytes_per_sec": 0 00:14:19.777 }, 00:14:19.777 "claimed": true, 00:14:19.777 "claim_type": "exclusive_write", 00:14:19.777 "zoned": false, 00:14:19.777 "supported_io_types": { 00:14:19.777 "read": true, 00:14:19.777 "write": true, 00:14:19.777 "unmap": true, 00:14:19.777 "flush": true, 00:14:19.777 "reset": true, 00:14:19.777 "nvme_admin": false, 00:14:19.777 "nvme_io": false, 00:14:19.777 "nvme_io_md": false, 00:14:19.777 "write_zeroes": true, 00:14:19.777 "zcopy": true, 00:14:19.777 "get_zone_info": false, 00:14:19.777 "zone_management": false, 00:14:19.777 "zone_append": false, 00:14:19.777 "compare": false, 00:14:19.777 "compare_and_write": false, 00:14:19.777 "abort": true, 00:14:19.777 "seek_hole": false, 00:14:19.777 "seek_data": false, 00:14:19.777 "copy": true, 00:14:19.777 "nvme_iov_md": false 00:14:19.777 }, 00:14:19.777 "memory_domains": [ 00:14:19.777 { 00:14:19.777 "dma_device_id": "system", 00:14:19.777 "dma_device_type": 1 00:14:19.777 }, 00:14:19.777 { 00:14:19.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.777 "dma_device_type": 2 00:14:19.777 } 00:14:19.777 ], 00:14:19.777 "driver_specific": {} 00:14:19.777 } 00:14:19.777 ] 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.777 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.036 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.036 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.036 "name": "Existed_Raid", 00:14:20.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.036 "strip_size_kb": 64, 00:14:20.036 "state": "configuring", 00:14:20.036 "raid_level": "concat", 00:14:20.036 "superblock": false, 
00:14:20.036 "num_base_bdevs": 4, 00:14:20.036 "num_base_bdevs_discovered": 3, 00:14:20.036 "num_base_bdevs_operational": 4, 00:14:20.036 "base_bdevs_list": [ 00:14:20.036 { 00:14:20.036 "name": "BaseBdev1", 00:14:20.036 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:20.036 "is_configured": true, 00:14:20.036 "data_offset": 0, 00:14:20.036 "data_size": 65536 00:14:20.036 }, 00:14:20.036 { 00:14:20.036 "name": null, 00:14:20.036 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:20.036 "is_configured": false, 00:14:20.036 "data_offset": 0, 00:14:20.036 "data_size": 65536 00:14:20.036 }, 00:14:20.036 { 00:14:20.036 "name": "BaseBdev3", 00:14:20.036 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:20.036 "is_configured": true, 00:14:20.036 "data_offset": 0, 00:14:20.036 "data_size": 65536 00:14:20.036 }, 00:14:20.036 { 00:14:20.036 "name": "BaseBdev4", 00:14:20.036 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:20.036 "is_configured": true, 00:14:20.036 "data_offset": 0, 00:14:20.036 "data_size": 65536 00:14:20.036 } 00:14:20.036 ] 00:14:20.036 }' 00:14:20.036 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.036 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.294 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.294 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.294 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.294 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.294 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:20.552 07:11:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.552 [2024-11-20 07:11:02.585404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.552 "name": "Existed_Raid", 00:14:20.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.552 "strip_size_kb": 64, 00:14:20.552 "state": "configuring", 00:14:20.552 "raid_level": "concat", 00:14:20.552 "superblock": false, 00:14:20.552 "num_base_bdevs": 4, 00:14:20.552 "num_base_bdevs_discovered": 2, 00:14:20.552 "num_base_bdevs_operational": 4, 00:14:20.552 "base_bdevs_list": [ 00:14:20.552 { 00:14:20.552 "name": "BaseBdev1", 00:14:20.552 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:20.552 "is_configured": true, 00:14:20.552 "data_offset": 0, 00:14:20.552 "data_size": 65536 00:14:20.552 }, 00:14:20.552 { 00:14:20.552 "name": null, 00:14:20.552 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:20.552 "is_configured": false, 00:14:20.552 "data_offset": 0, 00:14:20.552 "data_size": 65536 00:14:20.552 }, 00:14:20.552 { 00:14:20.552 "name": null, 00:14:20.552 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:20.552 "is_configured": false, 00:14:20.552 "data_offset": 0, 00:14:20.552 "data_size": 65536 00:14:20.552 }, 00:14:20.552 { 00:14:20.552 "name": "BaseBdev4", 00:14:20.552 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:20.552 "is_configured": true, 00:14:20.552 "data_offset": 0, 00:14:20.552 "data_size": 65536 00:14:20.552 } 00:14:20.552 ] 00:14:20.552 }' 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.552 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.812 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.812 [2024-11-20 07:11:03.072603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.070 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.071 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.071 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.071 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.071 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.071 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.071 "name": "Existed_Raid", 00:14:21.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.071 "strip_size_kb": 64, 00:14:21.071 "state": "configuring", 00:14:21.071 "raid_level": "concat", 00:14:21.071 "superblock": false, 00:14:21.071 "num_base_bdevs": 4, 00:14:21.071 "num_base_bdevs_discovered": 3, 00:14:21.071 "num_base_bdevs_operational": 4, 00:14:21.071 "base_bdevs_list": [ 00:14:21.071 { 00:14:21.071 "name": "BaseBdev1", 00:14:21.071 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:21.071 "is_configured": true, 00:14:21.071 "data_offset": 0, 00:14:21.071 "data_size": 65536 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "name": null, 00:14:21.071 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:21.071 "is_configured": false, 00:14:21.071 "data_offset": 0, 00:14:21.071 "data_size": 65536 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "name": "BaseBdev3", 00:14:21.071 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:21.071 
"is_configured": true, 00:14:21.071 "data_offset": 0, 00:14:21.071 "data_size": 65536 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "name": "BaseBdev4", 00:14:21.071 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:21.071 "is_configured": true, 00:14:21.071 "data_offset": 0, 00:14:21.071 "data_size": 65536 00:14:21.071 } 00:14:21.071 ] 00:14:21.071 }' 00:14:21.071 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.071 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.328 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.328 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.328 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.329 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.329 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.588 [2024-11-20 07:11:03.615759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.588 "name": "Existed_Raid", 00:14:21.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.588 "strip_size_kb": 64, 00:14:21.588 "state": "configuring", 00:14:21.588 "raid_level": "concat", 00:14:21.588 "superblock": false, 00:14:21.588 "num_base_bdevs": 4, 00:14:21.588 "num_base_bdevs_discovered": 2, 00:14:21.588 "num_base_bdevs_operational": 4, 
00:14:21.588 "base_bdevs_list": [ 00:14:21.588 { 00:14:21.588 "name": null, 00:14:21.588 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:21.588 "is_configured": false, 00:14:21.588 "data_offset": 0, 00:14:21.588 "data_size": 65536 00:14:21.588 }, 00:14:21.588 { 00:14:21.588 "name": null, 00:14:21.588 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:21.588 "is_configured": false, 00:14:21.588 "data_offset": 0, 00:14:21.588 "data_size": 65536 00:14:21.588 }, 00:14:21.588 { 00:14:21.588 "name": "BaseBdev3", 00:14:21.588 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:21.588 "is_configured": true, 00:14:21.588 "data_offset": 0, 00:14:21.588 "data_size": 65536 00:14:21.588 }, 00:14:21.588 { 00:14:21.588 "name": "BaseBdev4", 00:14:21.588 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:21.588 "is_configured": true, 00:14:21.588 "data_offset": 0, 00:14:21.588 "data_size": 65536 00:14:21.588 } 00:14:21.588 ] 00:14:21.588 }' 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.588 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:22.155 07:11:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.155 [2024-11-20 07:11:04.249418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.155 07:11:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.155 "name": "Existed_Raid", 00:14:22.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.155 "strip_size_kb": 64, 00:14:22.155 "state": "configuring", 00:14:22.155 "raid_level": "concat", 00:14:22.155 "superblock": false, 00:14:22.155 "num_base_bdevs": 4, 00:14:22.155 "num_base_bdevs_discovered": 3, 00:14:22.155 "num_base_bdevs_operational": 4, 00:14:22.155 "base_bdevs_list": [ 00:14:22.155 { 00:14:22.155 "name": null, 00:14:22.155 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:22.155 "is_configured": false, 00:14:22.155 "data_offset": 0, 00:14:22.155 "data_size": 65536 00:14:22.155 }, 00:14:22.155 { 00:14:22.155 "name": "BaseBdev2", 00:14:22.155 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:22.155 "is_configured": true, 00:14:22.155 "data_offset": 0, 00:14:22.155 "data_size": 65536 00:14:22.155 }, 00:14:22.155 { 00:14:22.155 "name": "BaseBdev3", 00:14:22.155 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:22.155 "is_configured": true, 00:14:22.155 "data_offset": 0, 00:14:22.155 "data_size": 65536 00:14:22.155 }, 00:14:22.155 { 00:14:22.155 "name": "BaseBdev4", 00:14:22.155 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:22.155 "is_configured": true, 00:14:22.155 "data_offset": 0, 00:14:22.155 "data_size": 65536 00:14:22.155 } 00:14:22.155 ] 00:14:22.155 }' 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.155 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:22.724 07:11:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c7185d5b-7e01-42d0-8fd8-53d695b53937 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.724 [2024-11-20 07:11:04.816487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:22.724 [2024-11-20 07:11:04.816667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:22.724 [2024-11-20 07:11:04.816694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:22.724 [2024-11-20 07:11:04.816994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:22.724 
[2024-11-20 07:11:04.817203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:22.724 [2024-11-20 07:11:04.817218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:22.724 [2024-11-20 07:11:04.817517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.724 NewBaseBdev 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:22.724 [ 00:14:22.724 { 00:14:22.724 "name": "NewBaseBdev", 00:14:22.724 "aliases": [ 00:14:22.724 "c7185d5b-7e01-42d0-8fd8-53d695b53937" 00:14:22.724 ], 00:14:22.724 "product_name": "Malloc disk", 00:14:22.724 "block_size": 512, 00:14:22.724 "num_blocks": 65536, 00:14:22.724 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:22.724 "assigned_rate_limits": { 00:14:22.724 "rw_ios_per_sec": 0, 00:14:22.724 "rw_mbytes_per_sec": 0, 00:14:22.724 "r_mbytes_per_sec": 0, 00:14:22.724 "w_mbytes_per_sec": 0 00:14:22.724 }, 00:14:22.724 "claimed": true, 00:14:22.724 "claim_type": "exclusive_write", 00:14:22.724 "zoned": false, 00:14:22.724 "supported_io_types": { 00:14:22.724 "read": true, 00:14:22.724 "write": true, 00:14:22.724 "unmap": true, 00:14:22.724 "flush": true, 00:14:22.724 "reset": true, 00:14:22.724 "nvme_admin": false, 00:14:22.724 "nvme_io": false, 00:14:22.724 "nvme_io_md": false, 00:14:22.724 "write_zeroes": true, 00:14:22.724 "zcopy": true, 00:14:22.724 "get_zone_info": false, 00:14:22.724 "zone_management": false, 00:14:22.724 "zone_append": false, 00:14:22.724 "compare": false, 00:14:22.724 "compare_and_write": false, 00:14:22.724 "abort": true, 00:14:22.724 "seek_hole": false, 00:14:22.724 "seek_data": false, 00:14:22.724 "copy": true, 00:14:22.724 "nvme_iov_md": false 00:14:22.724 }, 00:14:22.724 "memory_domains": [ 00:14:22.724 { 00:14:22.724 "dma_device_id": "system", 00:14:22.724 "dma_device_type": 1 00:14:22.724 }, 00:14:22.724 { 00:14:22.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.724 "dma_device_type": 2 00:14:22.724 } 00:14:22.724 ], 00:14:22.724 "driver_specific": {} 00:14:22.724 } 00:14:22.724 ] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:14:22.724 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.725 "name": "Existed_Raid", 00:14:22.725 "uuid": "0076f5e9-f9ef-4e9e-a90b-dd88f4931a68", 00:14:22.725 "strip_size_kb": 64, 00:14:22.725 "state": "online", 00:14:22.725 "raid_level": "concat", 00:14:22.725 "superblock": false, 00:14:22.725 "num_base_bdevs": 4, 00:14:22.725 
"num_base_bdevs_discovered": 4, 00:14:22.725 "num_base_bdevs_operational": 4, 00:14:22.725 "base_bdevs_list": [ 00:14:22.725 { 00:14:22.725 "name": "NewBaseBdev", 00:14:22.725 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:22.725 "is_configured": true, 00:14:22.725 "data_offset": 0, 00:14:22.725 "data_size": 65536 00:14:22.725 }, 00:14:22.725 { 00:14:22.725 "name": "BaseBdev2", 00:14:22.725 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:22.725 "is_configured": true, 00:14:22.725 "data_offset": 0, 00:14:22.725 "data_size": 65536 00:14:22.725 }, 00:14:22.725 { 00:14:22.725 "name": "BaseBdev3", 00:14:22.725 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:22.725 "is_configured": true, 00:14:22.725 "data_offset": 0, 00:14:22.725 "data_size": 65536 00:14:22.725 }, 00:14:22.725 { 00:14:22.725 "name": "BaseBdev4", 00:14:22.725 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:22.725 "is_configured": true, 00:14:22.725 "data_offset": 0, 00:14:22.725 "data_size": 65536 00:14:22.725 } 00:14:22.725 ] 00:14:22.725 }' 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.725 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 [2024-11-20 07:11:05.336076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.298 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.299 "name": "Existed_Raid", 00:14:23.299 "aliases": [ 00:14:23.299 "0076f5e9-f9ef-4e9e-a90b-dd88f4931a68" 00:14:23.299 ], 00:14:23.299 "product_name": "Raid Volume", 00:14:23.299 "block_size": 512, 00:14:23.299 "num_blocks": 262144, 00:14:23.299 "uuid": "0076f5e9-f9ef-4e9e-a90b-dd88f4931a68", 00:14:23.299 "assigned_rate_limits": { 00:14:23.299 "rw_ios_per_sec": 0, 00:14:23.299 "rw_mbytes_per_sec": 0, 00:14:23.299 "r_mbytes_per_sec": 0, 00:14:23.299 "w_mbytes_per_sec": 0 00:14:23.299 }, 00:14:23.299 "claimed": false, 00:14:23.299 "zoned": false, 00:14:23.299 "supported_io_types": { 00:14:23.299 "read": true, 00:14:23.299 "write": true, 00:14:23.299 "unmap": true, 00:14:23.299 "flush": true, 00:14:23.299 "reset": true, 00:14:23.299 "nvme_admin": false, 00:14:23.299 "nvme_io": false, 00:14:23.299 "nvme_io_md": false, 00:14:23.299 "write_zeroes": true, 00:14:23.299 "zcopy": false, 00:14:23.299 "get_zone_info": false, 00:14:23.299 "zone_management": false, 00:14:23.299 "zone_append": false, 00:14:23.299 "compare": false, 00:14:23.299 "compare_and_write": false, 00:14:23.299 "abort": false, 00:14:23.299 "seek_hole": false, 00:14:23.299 "seek_data": false, 00:14:23.299 "copy": false, 00:14:23.299 "nvme_iov_md": false 00:14:23.299 }, 00:14:23.299 "memory_domains": [ 
00:14:23.299 { 00:14:23.299 "dma_device_id": "system", 00:14:23.299 "dma_device_type": 1 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.299 "dma_device_type": 2 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "dma_device_id": "system", 00:14:23.299 "dma_device_type": 1 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.299 "dma_device_type": 2 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "dma_device_id": "system", 00:14:23.299 "dma_device_type": 1 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.299 "dma_device_type": 2 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "dma_device_id": "system", 00:14:23.299 "dma_device_type": 1 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.299 "dma_device_type": 2 00:14:23.299 } 00:14:23.299 ], 00:14:23.299 "driver_specific": { 00:14:23.299 "raid": { 00:14:23.299 "uuid": "0076f5e9-f9ef-4e9e-a90b-dd88f4931a68", 00:14:23.299 "strip_size_kb": 64, 00:14:23.299 "state": "online", 00:14:23.299 "raid_level": "concat", 00:14:23.299 "superblock": false, 00:14:23.299 "num_base_bdevs": 4, 00:14:23.299 "num_base_bdevs_discovered": 4, 00:14:23.299 "num_base_bdevs_operational": 4, 00:14:23.299 "base_bdevs_list": [ 00:14:23.299 { 00:14:23.299 "name": "NewBaseBdev", 00:14:23.299 "uuid": "c7185d5b-7e01-42d0-8fd8-53d695b53937", 00:14:23.299 "is_configured": true, 00:14:23.299 "data_offset": 0, 00:14:23.299 "data_size": 65536 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "name": "BaseBdev2", 00:14:23.299 "uuid": "4e14735d-af55-44f6-a3be-4ad0b0b73da9", 00:14:23.299 "is_configured": true, 00:14:23.299 "data_offset": 0, 00:14:23.299 "data_size": 65536 00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "name": "BaseBdev3", 00:14:23.299 "uuid": "34f5ef05-307d-4fe9-8628-2e376125fa20", 00:14:23.299 "is_configured": true, 00:14:23.299 "data_offset": 0, 00:14:23.299 "data_size": 65536 
00:14:23.299 }, 00:14:23.299 { 00:14:23.299 "name": "BaseBdev4", 00:14:23.299 "uuid": "dbb609ae-5692-4fcd-8ecc-97c8a22a370f", 00:14:23.299 "is_configured": true, 00:14:23.299 "data_offset": 0, 00:14:23.299 "data_size": 65536 00:14:23.299 } 00:14:23.299 ] 00:14:23.299 } 00:14:23.299 } 00:14:23.299 }' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:23.299 BaseBdev2 00:14:23.299 BaseBdev3 00:14:23.299 BaseBdev4' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.299 
07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.299 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.557 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.557 [2024-11-20 07:11:05.667170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.557 [2024-11-20 07:11:05.667292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.558 [2024-11-20 07:11:05.667460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.558 [2024-11-20 07:11:05.667560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.558 [2024-11-20 07:11:05.667572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71603 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 71603 ']' 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71603 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71603 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.558 killing process with pid 71603 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71603' 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71603 00:14:23.558 [2024-11-20 07:11:05.714400] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.558 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71603 00:14:24.123 [2024-11-20 07:11:06.156581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:25.497 00:14:25.497 real 0m12.104s 00:14:25.497 user 0m19.164s 00:14:25.497 sys 0m2.103s 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.497 ************************************ 00:14:25.497 END TEST raid_state_function_test 00:14:25.497 ************************************ 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.497 07:11:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:14:25.497 07:11:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:25.497 07:11:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.497 07:11:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.497 ************************************ 00:14:25.497 START TEST raid_state_function_test_sb 00:14:25.497 ************************************ 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:14:25.497 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72280 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72280' 00:14:25.498 Process raid pid: 72280 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72280 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72280 ']' 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.498 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.498 [2024-11-20 07:11:07.560881] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:25.498 [2024-11-20 07:11:07.561011] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.498 [2024-11-20 07:11:07.741714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.755 [2024-11-20 07:11:07.866346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.012 [2024-11-20 07:11:08.090710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.012 [2024-11-20 07:11:08.090755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.270 [2024-11-20 07:11:08.435916] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.270 [2024-11-20 07:11:08.436031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.270 [2024-11-20 07:11:08.436066] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.270 [2024-11-20 07:11:08.436094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.270 [2024-11-20 07:11:08.436115] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:26.270 [2024-11-20 07:11:08.436140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.270 [2024-11-20 07:11:08.436160] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:26.270 [2024-11-20 07:11:08.436214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.270 
07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.270 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.270 "name": "Existed_Raid", 00:14:26.270 "uuid": "804ca5aa-6c08-45a3-b685-fb8d091d3fc1", 00:14:26.270 "strip_size_kb": 64, 00:14:26.270 "state": "configuring", 00:14:26.270 "raid_level": "concat", 00:14:26.270 "superblock": true, 00:14:26.270 "num_base_bdevs": 4, 00:14:26.270 "num_base_bdevs_discovered": 0, 00:14:26.270 "num_base_bdevs_operational": 4, 00:14:26.270 "base_bdevs_list": [ 00:14:26.270 { 00:14:26.271 "name": "BaseBdev1", 00:14:26.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.271 "is_configured": false, 00:14:26.271 "data_offset": 0, 00:14:26.271 "data_size": 0 00:14:26.271 }, 00:14:26.271 { 00:14:26.271 "name": "BaseBdev2", 00:14:26.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.271 "is_configured": false, 00:14:26.271 "data_offset": 0, 00:14:26.271 "data_size": 0 00:14:26.271 }, 00:14:26.271 { 00:14:26.271 "name": "BaseBdev3", 00:14:26.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.271 "is_configured": false, 00:14:26.271 "data_offset": 0, 00:14:26.271 "data_size": 0 00:14:26.271 }, 00:14:26.271 { 00:14:26.271 "name": "BaseBdev4", 00:14:26.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.271 "is_configured": false, 00:14:26.271 "data_offset": 0, 00:14:26.271 "data_size": 0 00:14:26.271 } 00:14:26.271 ] 00:14:26.271 }' 00:14:26.271 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.271 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.838 07:11:08 
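The `verify_raid_bdev_state` calls traced above fetch all raid bdevs with `rpc_cmd bdev_raid_get_bdevs all`, filter to the one named `Existed_Raid` with `jq`, and check its fields. A minimal runnable sketch of that core check, run on a trimmed sample JSON modeled on the echoed output above rather than on live RPC output (assumes `jq` is installed; `raid_bdev_info` here is a hand-written stand-in):

```shell
# Sketch of verify_raid_bdev_state's core logic. In the real test this JSON
# comes from: rpc_cmd bdev_raid_get_bdevs all
raid_bdev_info='[{"name": "Existed_Raid", "state": "configuring",
  "raid_level": "concat", "strip_size_kb": 64,
  "num_base_bdevs": 4, "num_base_bdevs_discovered": 0}]'

# Select the raid bdev under test by name, as the traced jq filter does.
info=$(echo "$raid_bdev_info" | jq -r '.[] | select(.name == "Existed_Raid")')

# Pull out the fields the test asserts on.
state=$(echo "$info" | jq -r '.state')
level=$(echo "$info" | jq -r '.raid_level')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

# With no base bdevs created yet, the array must sit in "configuring".
if [ "$state" = "configuring" ] && [ "$level" = "concat" ] && [ "$discovered" -eq 0 ]; then
    echo "Existed_Raid state OK"
fi
```

This mirrors why the test can create the array before any `BaseBdevN` exists: `bdev_raid_create` accepts missing base bdevs and leaves the array in `configuring` until all four are discovered.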
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.838 [2024-11-20 07:11:08.911110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.838 [2024-11-20 07:11:08.911207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.838 [2024-11-20 07:11:08.923093] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.838 [2024-11-20 07:11:08.923185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.838 [2024-11-20 07:11:08.923226] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.838 [2024-11-20 07:11:08.923253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.838 [2024-11-20 07:11:08.923292] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.838 [2024-11-20 07:11:08.923319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.838 [2024-11-20 07:11:08.923381] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:26.838 [2024-11-20 07:11:08.923418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.838 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.838 [2024-11-20 07:11:08.970287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.838 BaseBdev1 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.839 [ 00:14:26.839 { 00:14:26.839 "name": "BaseBdev1", 00:14:26.839 "aliases": [ 00:14:26.839 "a07721ef-f89a-4867-ba31-f611fd61f12d" 00:14:26.839 ], 00:14:26.839 "product_name": "Malloc disk", 00:14:26.839 "block_size": 512, 00:14:26.839 "num_blocks": 65536, 00:14:26.839 "uuid": "a07721ef-f89a-4867-ba31-f611fd61f12d", 00:14:26.839 "assigned_rate_limits": { 00:14:26.839 "rw_ios_per_sec": 0, 00:14:26.839 "rw_mbytes_per_sec": 0, 00:14:26.839 "r_mbytes_per_sec": 0, 00:14:26.839 "w_mbytes_per_sec": 0 00:14:26.839 }, 00:14:26.839 "claimed": true, 00:14:26.839 "claim_type": "exclusive_write", 00:14:26.839 "zoned": false, 00:14:26.839 "supported_io_types": { 00:14:26.839 "read": true, 00:14:26.839 "write": true, 00:14:26.839 "unmap": true, 00:14:26.839 "flush": true, 00:14:26.839 "reset": true, 00:14:26.839 "nvme_admin": false, 00:14:26.839 "nvme_io": false, 00:14:26.839 "nvme_io_md": false, 00:14:26.839 "write_zeroes": true, 00:14:26.839 "zcopy": true, 00:14:26.839 "get_zone_info": false, 00:14:26.839 "zone_management": false, 00:14:26.839 "zone_append": false, 00:14:26.839 "compare": false, 00:14:26.839 "compare_and_write": false, 00:14:26.839 "abort": true, 00:14:26.839 "seek_hole": false, 00:14:26.839 "seek_data": false, 00:14:26.839 "copy": true, 00:14:26.839 "nvme_iov_md": false 00:14:26.839 }, 00:14:26.839 "memory_domains": [ 00:14:26.839 { 00:14:26.839 "dma_device_id": "system", 00:14:26.839 "dma_device_type": 1 00:14:26.839 }, 00:14:26.839 { 00:14:26.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.839 "dma_device_type": 2 00:14:26.839 } 
00:14:26.839 ], 00:14:26.839 "driver_specific": {} 00:14:26.839 } 00:14:26.839 ] 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.839 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.839 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.839 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.839 07:11:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.839 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.839 "name": "Existed_Raid", 00:14:26.839 "uuid": "41aa5a88-8069-41aa-9772-0cf53fac2598", 00:14:26.839 "strip_size_kb": 64, 00:14:26.839 "state": "configuring", 00:14:26.839 "raid_level": "concat", 00:14:26.839 "superblock": true, 00:14:26.839 "num_base_bdevs": 4, 00:14:26.839 "num_base_bdevs_discovered": 1, 00:14:26.839 "num_base_bdevs_operational": 4, 00:14:26.839 "base_bdevs_list": [ 00:14:26.839 { 00:14:26.839 "name": "BaseBdev1", 00:14:26.839 "uuid": "a07721ef-f89a-4867-ba31-f611fd61f12d", 00:14:26.839 "is_configured": true, 00:14:26.839 "data_offset": 2048, 00:14:26.839 "data_size": 63488 00:14:26.839 }, 00:14:26.839 { 00:14:26.839 "name": "BaseBdev2", 00:14:26.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.839 "is_configured": false, 00:14:26.839 "data_offset": 0, 00:14:26.839 "data_size": 0 00:14:26.839 }, 00:14:26.839 { 00:14:26.839 "name": "BaseBdev3", 00:14:26.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.839 "is_configured": false, 00:14:26.839 "data_offset": 0, 00:14:26.839 "data_size": 0 00:14:26.839 }, 00:14:26.839 { 00:14:26.839 "name": "BaseBdev4", 00:14:26.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.839 "is_configured": false, 00:14:26.839 "data_offset": 0, 00:14:26.839 "data_size": 0 00:14:26.839 } 00:14:26.839 ] 00:14:26.839 }' 00:14:26.839 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.839 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.405 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.405 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.406 07:11:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.406 [2024-11-20 07:11:09.413618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.406 [2024-11-20 07:11:09.413677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.406 [2024-11-20 07:11:09.421651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.406 [2024-11-20 07:11:09.423511] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.406 [2024-11-20 07:11:09.423551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.406 [2024-11-20 07:11:09.423561] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.406 [2024-11-20 07:11:09.423572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.406 [2024-11-20 07:11:09.423579] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:27.406 [2024-11-20 07:11:09.423588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:27.406 "name": "Existed_Raid", 00:14:27.406 "uuid": "fadfd8f6-55fd-4918-9422-1a92fa97176a", 00:14:27.406 "strip_size_kb": 64, 00:14:27.406 "state": "configuring", 00:14:27.406 "raid_level": "concat", 00:14:27.406 "superblock": true, 00:14:27.406 "num_base_bdevs": 4, 00:14:27.406 "num_base_bdevs_discovered": 1, 00:14:27.406 "num_base_bdevs_operational": 4, 00:14:27.406 "base_bdevs_list": [ 00:14:27.406 { 00:14:27.406 "name": "BaseBdev1", 00:14:27.406 "uuid": "a07721ef-f89a-4867-ba31-f611fd61f12d", 00:14:27.406 "is_configured": true, 00:14:27.406 "data_offset": 2048, 00:14:27.406 "data_size": 63488 00:14:27.406 }, 00:14:27.406 { 00:14:27.406 "name": "BaseBdev2", 00:14:27.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.406 "is_configured": false, 00:14:27.406 "data_offset": 0, 00:14:27.406 "data_size": 0 00:14:27.406 }, 00:14:27.406 { 00:14:27.406 "name": "BaseBdev3", 00:14:27.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.406 "is_configured": false, 00:14:27.406 "data_offset": 0, 00:14:27.406 "data_size": 0 00:14:27.406 }, 00:14:27.406 { 00:14:27.406 "name": "BaseBdev4", 00:14:27.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.406 "is_configured": false, 00:14:27.406 "data_offset": 0, 00:14:27.406 "data_size": 0 00:14:27.406 } 00:14:27.406 ] 00:14:27.406 }' 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.406 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.664 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.664 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.664 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.664 [2024-11-20 07:11:09.925883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:27.922 BaseBdev2 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.922 [ 00:14:27.922 { 00:14:27.922 "name": "BaseBdev2", 00:14:27.922 "aliases": [ 00:14:27.922 "b8ac4d0b-96d7-4ab7-98e5-563f92152b0f" 00:14:27.922 ], 00:14:27.922 "product_name": "Malloc disk", 00:14:27.922 "block_size": 512, 00:14:27.922 "num_blocks": 65536, 00:14:27.922 "uuid": "b8ac4d0b-96d7-4ab7-98e5-563f92152b0f", 
00:14:27.922 "assigned_rate_limits": { 00:14:27.922 "rw_ios_per_sec": 0, 00:14:27.922 "rw_mbytes_per_sec": 0, 00:14:27.922 "r_mbytes_per_sec": 0, 00:14:27.922 "w_mbytes_per_sec": 0 00:14:27.922 }, 00:14:27.922 "claimed": true, 00:14:27.922 "claim_type": "exclusive_write", 00:14:27.922 "zoned": false, 00:14:27.922 "supported_io_types": { 00:14:27.922 "read": true, 00:14:27.922 "write": true, 00:14:27.922 "unmap": true, 00:14:27.922 "flush": true, 00:14:27.922 "reset": true, 00:14:27.922 "nvme_admin": false, 00:14:27.922 "nvme_io": false, 00:14:27.922 "nvme_io_md": false, 00:14:27.922 "write_zeroes": true, 00:14:27.922 "zcopy": true, 00:14:27.922 "get_zone_info": false, 00:14:27.922 "zone_management": false, 00:14:27.922 "zone_append": false, 00:14:27.922 "compare": false, 00:14:27.922 "compare_and_write": false, 00:14:27.922 "abort": true, 00:14:27.922 "seek_hole": false, 00:14:27.922 "seek_data": false, 00:14:27.922 "copy": true, 00:14:27.922 "nvme_iov_md": false 00:14:27.922 }, 00:14:27.922 "memory_domains": [ 00:14:27.922 { 00:14:27.922 "dma_device_id": "system", 00:14:27.922 "dma_device_type": 1 00:14:27.922 }, 00:14:27.922 { 00:14:27.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.922 "dma_device_type": 2 00:14:27.922 } 00:14:27.922 ], 00:14:27.922 "driver_specific": {} 00:14:27.922 } 00:14:27.922 ] 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.922 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.922 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.922 "name": "Existed_Raid", 00:14:27.922 "uuid": "fadfd8f6-55fd-4918-9422-1a92fa97176a", 00:14:27.922 "strip_size_kb": 64, 00:14:27.922 "state": "configuring", 00:14:27.922 "raid_level": "concat", 00:14:27.922 "superblock": true, 00:14:27.922 "num_base_bdevs": 4, 00:14:27.922 "num_base_bdevs_discovered": 2, 00:14:27.922 
"num_base_bdevs_operational": 4, 00:14:27.922 "base_bdevs_list": [ 00:14:27.922 { 00:14:27.922 "name": "BaseBdev1", 00:14:27.922 "uuid": "a07721ef-f89a-4867-ba31-f611fd61f12d", 00:14:27.922 "is_configured": true, 00:14:27.922 "data_offset": 2048, 00:14:27.922 "data_size": 63488 00:14:27.922 }, 00:14:27.922 { 00:14:27.922 "name": "BaseBdev2", 00:14:27.922 "uuid": "b8ac4d0b-96d7-4ab7-98e5-563f92152b0f", 00:14:27.922 "is_configured": true, 00:14:27.923 "data_offset": 2048, 00:14:27.923 "data_size": 63488 00:14:27.923 }, 00:14:27.923 { 00:14:27.923 "name": "BaseBdev3", 00:14:27.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.923 "is_configured": false, 00:14:27.923 "data_offset": 0, 00:14:27.923 "data_size": 0 00:14:27.923 }, 00:14:27.923 { 00:14:27.923 "name": "BaseBdev4", 00:14:27.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.923 "is_configured": false, 00:14:27.923 "data_offset": 0, 00:14:27.923 "data_size": 0 00:14:27.923 } 00:14:27.923 ] 00:14:27.923 }' 00:14:27.923 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.923 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.181 [2024-11-20 07:11:10.425074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.181 BaseBdev3 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.181 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.439 [ 00:14:28.439 { 00:14:28.439 "name": "BaseBdev3", 00:14:28.440 "aliases": [ 00:14:28.440 "2a5f52ec-9469-4f70-b01c-33e900285148" 00:14:28.440 ], 00:14:28.440 "product_name": "Malloc disk", 00:14:28.440 "block_size": 512, 00:14:28.440 "num_blocks": 65536, 00:14:28.440 "uuid": "2a5f52ec-9469-4f70-b01c-33e900285148", 00:14:28.440 "assigned_rate_limits": { 00:14:28.440 "rw_ios_per_sec": 0, 00:14:28.440 "rw_mbytes_per_sec": 0, 00:14:28.440 "r_mbytes_per_sec": 0, 00:14:28.440 "w_mbytes_per_sec": 0 00:14:28.440 }, 00:14:28.440 "claimed": true, 00:14:28.440 "claim_type": "exclusive_write", 00:14:28.440 "zoned": false, 00:14:28.440 "supported_io_types": { 
00:14:28.440 "read": true, 00:14:28.440 "write": true, 00:14:28.440 "unmap": true, 00:14:28.440 "flush": true, 00:14:28.440 "reset": true, 00:14:28.440 "nvme_admin": false, 00:14:28.440 "nvme_io": false, 00:14:28.440 "nvme_io_md": false, 00:14:28.440 "write_zeroes": true, 00:14:28.440 "zcopy": true, 00:14:28.440 "get_zone_info": false, 00:14:28.440 "zone_management": false, 00:14:28.440 "zone_append": false, 00:14:28.440 "compare": false, 00:14:28.440 "compare_and_write": false, 00:14:28.440 "abort": true, 00:14:28.440 "seek_hole": false, 00:14:28.440 "seek_data": false, 00:14:28.440 "copy": true, 00:14:28.440 "nvme_iov_md": false 00:14:28.440 }, 00:14:28.440 "memory_domains": [ 00:14:28.440 { 00:14:28.440 "dma_device_id": "system", 00:14:28.440 "dma_device_type": 1 00:14:28.440 }, 00:14:28.440 { 00:14:28.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.440 "dma_device_type": 2 00:14:28.440 } 00:14:28.440 ], 00:14:28.440 "driver_specific": {} 00:14:28.440 } 00:14:28.440 ] 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.440 "name": "Existed_Raid", 00:14:28.440 "uuid": "fadfd8f6-55fd-4918-9422-1a92fa97176a", 00:14:28.440 "strip_size_kb": 64, 00:14:28.440 "state": "configuring", 00:14:28.440 "raid_level": "concat", 00:14:28.440 "superblock": true, 00:14:28.440 "num_base_bdevs": 4, 00:14:28.440 "num_base_bdevs_discovered": 3, 00:14:28.440 "num_base_bdevs_operational": 4, 00:14:28.440 "base_bdevs_list": [ 00:14:28.440 { 00:14:28.440 "name": "BaseBdev1", 00:14:28.440 "uuid": "a07721ef-f89a-4867-ba31-f611fd61f12d", 00:14:28.440 "is_configured": true, 00:14:28.440 "data_offset": 2048, 00:14:28.440 "data_size": 63488 00:14:28.440 }, 00:14:28.440 { 00:14:28.440 "name": "BaseBdev2", 00:14:28.440 
"uuid": "b8ac4d0b-96d7-4ab7-98e5-563f92152b0f", 00:14:28.440 "is_configured": true, 00:14:28.440 "data_offset": 2048, 00:14:28.440 "data_size": 63488 00:14:28.440 }, 00:14:28.440 { 00:14:28.440 "name": "BaseBdev3", 00:14:28.440 "uuid": "2a5f52ec-9469-4f70-b01c-33e900285148", 00:14:28.440 "is_configured": true, 00:14:28.440 "data_offset": 2048, 00:14:28.440 "data_size": 63488 00:14:28.440 }, 00:14:28.440 { 00:14:28.440 "name": "BaseBdev4", 00:14:28.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.440 "is_configured": false, 00:14:28.440 "data_offset": 0, 00:14:28.440 "data_size": 0 00:14:28.440 } 00:14:28.440 ] 00:14:28.440 }' 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.440 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.698 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:28.698 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.698 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.957 [2024-11-20 07:11:11.000090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:28.957 [2024-11-20 07:11:11.000410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.957 [2024-11-20 07:11:11.000427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:28.957 [2024-11-20 07:11:11.000729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:28.957 [2024-11-20 07:11:11.000910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.957 [2024-11-20 07:11:11.000925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:14:28.957 BaseBdev4 00:14:28.957 [2024-11-20 07:11:11.001110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.957 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.957 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:28.957 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:28.957 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.957 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 [ 00:14:28.958 { 00:14:28.958 "name": "BaseBdev4", 00:14:28.958 "aliases": [ 00:14:28.958 "aaeb9ca4-88fb-4205-9540-69a80e3b9ff7" 00:14:28.958 ], 00:14:28.958 "product_name": "Malloc disk", 00:14:28.958 "block_size": 512, 
00:14:28.958 "num_blocks": 65536, 00:14:28.958 "uuid": "aaeb9ca4-88fb-4205-9540-69a80e3b9ff7", 00:14:28.958 "assigned_rate_limits": { 00:14:28.958 "rw_ios_per_sec": 0, 00:14:28.958 "rw_mbytes_per_sec": 0, 00:14:28.958 "r_mbytes_per_sec": 0, 00:14:28.958 "w_mbytes_per_sec": 0 00:14:28.958 }, 00:14:28.958 "claimed": true, 00:14:28.958 "claim_type": "exclusive_write", 00:14:28.958 "zoned": false, 00:14:28.958 "supported_io_types": { 00:14:28.958 "read": true, 00:14:28.958 "write": true, 00:14:28.958 "unmap": true, 00:14:28.958 "flush": true, 00:14:28.958 "reset": true, 00:14:28.958 "nvme_admin": false, 00:14:28.958 "nvme_io": false, 00:14:28.958 "nvme_io_md": false, 00:14:28.958 "write_zeroes": true, 00:14:28.958 "zcopy": true, 00:14:28.958 "get_zone_info": false, 00:14:28.958 "zone_management": false, 00:14:28.958 "zone_append": false, 00:14:28.958 "compare": false, 00:14:28.958 "compare_and_write": false, 00:14:28.958 "abort": true, 00:14:28.958 "seek_hole": false, 00:14:28.958 "seek_data": false, 00:14:28.958 "copy": true, 00:14:28.958 "nvme_iov_md": false 00:14:28.958 }, 00:14:28.958 "memory_domains": [ 00:14:28.958 { 00:14:28.958 "dma_device_id": "system", 00:14:28.958 "dma_device_type": 1 00:14:28.958 }, 00:14:28.958 { 00:14:28.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.958 "dma_device_type": 2 00:14:28.958 } 00:14:28.958 ], 00:14:28.958 "driver_specific": {} 00:14:28.958 } 00:14:28.958 ] 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.958 "name": "Existed_Raid", 00:14:28.958 "uuid": "fadfd8f6-55fd-4918-9422-1a92fa97176a", 00:14:28.958 "strip_size_kb": 64, 00:14:28.958 "state": "online", 00:14:28.958 "raid_level": "concat", 00:14:28.958 "superblock": true, 00:14:28.958 "num_base_bdevs": 
4, 00:14:28.958 "num_base_bdevs_discovered": 4, 00:14:28.958 "num_base_bdevs_operational": 4, 00:14:28.958 "base_bdevs_list": [ 00:14:28.958 { 00:14:28.958 "name": "BaseBdev1", 00:14:28.958 "uuid": "a07721ef-f89a-4867-ba31-f611fd61f12d", 00:14:28.958 "is_configured": true, 00:14:28.958 "data_offset": 2048, 00:14:28.958 "data_size": 63488 00:14:28.958 }, 00:14:28.958 { 00:14:28.958 "name": "BaseBdev2", 00:14:28.958 "uuid": "b8ac4d0b-96d7-4ab7-98e5-563f92152b0f", 00:14:28.958 "is_configured": true, 00:14:28.958 "data_offset": 2048, 00:14:28.958 "data_size": 63488 00:14:28.958 }, 00:14:28.958 { 00:14:28.958 "name": "BaseBdev3", 00:14:28.958 "uuid": "2a5f52ec-9469-4f70-b01c-33e900285148", 00:14:28.958 "is_configured": true, 00:14:28.958 "data_offset": 2048, 00:14:28.958 "data_size": 63488 00:14:28.958 }, 00:14:28.958 { 00:14:28.958 "name": "BaseBdev4", 00:14:28.958 "uuid": "aaeb9ca4-88fb-4205-9540-69a80e3b9ff7", 00:14:28.958 "is_configured": true, 00:14:28.958 "data_offset": 2048, 00:14:28.958 "data_size": 63488 00:14:28.958 } 00:14:28.958 ] 00:14:28.958 }' 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.958 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.216 
07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.216 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.216 [2024-11-20 07:11:11.471744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.474 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.474 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.474 "name": "Existed_Raid", 00:14:29.474 "aliases": [ 00:14:29.474 "fadfd8f6-55fd-4918-9422-1a92fa97176a" 00:14:29.474 ], 00:14:29.474 "product_name": "Raid Volume", 00:14:29.474 "block_size": 512, 00:14:29.474 "num_blocks": 253952, 00:14:29.474 "uuid": "fadfd8f6-55fd-4918-9422-1a92fa97176a", 00:14:29.474 "assigned_rate_limits": { 00:14:29.474 "rw_ios_per_sec": 0, 00:14:29.474 "rw_mbytes_per_sec": 0, 00:14:29.474 "r_mbytes_per_sec": 0, 00:14:29.474 "w_mbytes_per_sec": 0 00:14:29.474 }, 00:14:29.474 "claimed": false, 00:14:29.474 "zoned": false, 00:14:29.474 "supported_io_types": { 00:14:29.474 "read": true, 00:14:29.474 "write": true, 00:14:29.474 "unmap": true, 00:14:29.474 "flush": true, 00:14:29.474 "reset": true, 00:14:29.474 "nvme_admin": false, 00:14:29.474 "nvme_io": false, 00:14:29.474 "nvme_io_md": false, 00:14:29.474 "write_zeroes": true, 00:14:29.474 "zcopy": false, 00:14:29.474 "get_zone_info": false, 00:14:29.474 "zone_management": false, 00:14:29.474 "zone_append": false, 00:14:29.474 "compare": false, 00:14:29.474 "compare_and_write": false, 00:14:29.474 "abort": false, 00:14:29.474 "seek_hole": false, 00:14:29.474 "seek_data": false, 00:14:29.474 "copy": false, 00:14:29.474 
"nvme_iov_md": false 00:14:29.474 }, 00:14:29.474 "memory_domains": [ 00:14:29.474 { 00:14:29.474 "dma_device_id": "system", 00:14:29.474 "dma_device_type": 1 00:14:29.474 }, 00:14:29.474 { 00:14:29.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.474 "dma_device_type": 2 00:14:29.474 }, 00:14:29.474 { 00:14:29.474 "dma_device_id": "system", 00:14:29.474 "dma_device_type": 1 00:14:29.474 }, 00:14:29.474 { 00:14:29.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.474 "dma_device_type": 2 00:14:29.474 }, 00:14:29.474 { 00:14:29.474 "dma_device_id": "system", 00:14:29.474 "dma_device_type": 1 00:14:29.474 }, 00:14:29.474 { 00:14:29.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.474 "dma_device_type": 2 00:14:29.474 }, 00:14:29.475 { 00:14:29.475 "dma_device_id": "system", 00:14:29.475 "dma_device_type": 1 00:14:29.475 }, 00:14:29.475 { 00:14:29.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.475 "dma_device_type": 2 00:14:29.475 } 00:14:29.475 ], 00:14:29.475 "driver_specific": { 00:14:29.475 "raid": { 00:14:29.475 "uuid": "fadfd8f6-55fd-4918-9422-1a92fa97176a", 00:14:29.475 "strip_size_kb": 64, 00:14:29.475 "state": "online", 00:14:29.475 "raid_level": "concat", 00:14:29.475 "superblock": true, 00:14:29.475 "num_base_bdevs": 4, 00:14:29.475 "num_base_bdevs_discovered": 4, 00:14:29.475 "num_base_bdevs_operational": 4, 00:14:29.475 "base_bdevs_list": [ 00:14:29.475 { 00:14:29.475 "name": "BaseBdev1", 00:14:29.475 "uuid": "a07721ef-f89a-4867-ba31-f611fd61f12d", 00:14:29.475 "is_configured": true, 00:14:29.475 "data_offset": 2048, 00:14:29.475 "data_size": 63488 00:14:29.475 }, 00:14:29.475 { 00:14:29.475 "name": "BaseBdev2", 00:14:29.475 "uuid": "b8ac4d0b-96d7-4ab7-98e5-563f92152b0f", 00:14:29.475 "is_configured": true, 00:14:29.475 "data_offset": 2048, 00:14:29.475 "data_size": 63488 00:14:29.475 }, 00:14:29.475 { 00:14:29.475 "name": "BaseBdev3", 00:14:29.475 "uuid": "2a5f52ec-9469-4f70-b01c-33e900285148", 00:14:29.475 "is_configured": true, 
00:14:29.475 "data_offset": 2048, 00:14:29.475 "data_size": 63488 00:14:29.475 }, 00:14:29.475 { 00:14:29.475 "name": "BaseBdev4", 00:14:29.475 "uuid": "aaeb9ca4-88fb-4205-9540-69a80e3b9ff7", 00:14:29.475 "is_configured": true, 00:14:29.475 "data_offset": 2048, 00:14:29.475 "data_size": 63488 00:14:29.475 } 00:14:29.475 ] 00:14:29.475 } 00:14:29.475 } 00:14:29.475 }' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:29.475 BaseBdev2 00:14:29.475 BaseBdev3 00:14:29.475 BaseBdev4' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.475 07:11:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.475 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.733 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.733 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.733 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:29.733 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:29.733 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.733 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.733 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.734 [2024-11-20 07:11:11.798874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.734 [2024-11-20 07:11:11.798954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.734 [2024-11-20 07:11:11.799036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.734 "name": "Existed_Raid", 00:14:29.734 "uuid": "fadfd8f6-55fd-4918-9422-1a92fa97176a", 00:14:29.734 "strip_size_kb": 64, 00:14:29.734 "state": "offline", 00:14:29.734 "raid_level": "concat", 00:14:29.734 "superblock": true, 00:14:29.734 "num_base_bdevs": 4, 00:14:29.734 "num_base_bdevs_discovered": 3, 00:14:29.734 "num_base_bdevs_operational": 3, 00:14:29.734 "base_bdevs_list": [ 00:14:29.734 { 00:14:29.734 "name": null, 00:14:29.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.734 "is_configured": false, 00:14:29.734 "data_offset": 0, 00:14:29.734 "data_size": 63488 00:14:29.734 }, 00:14:29.734 { 00:14:29.734 "name": "BaseBdev2", 00:14:29.734 "uuid": "b8ac4d0b-96d7-4ab7-98e5-563f92152b0f", 00:14:29.734 "is_configured": true, 00:14:29.734 "data_offset": 2048, 00:14:29.734 "data_size": 63488 00:14:29.734 }, 00:14:29.734 { 00:14:29.734 "name": "BaseBdev3", 00:14:29.734 "uuid": "2a5f52ec-9469-4f70-b01c-33e900285148", 00:14:29.734 "is_configured": true, 00:14:29.734 "data_offset": 2048, 00:14:29.734 "data_size": 63488 00:14:29.734 }, 00:14:29.734 { 00:14:29.734 "name": "BaseBdev4", 00:14:29.734 "uuid": "aaeb9ca4-88fb-4205-9540-69a80e3b9ff7", 00:14:29.734 "is_configured": true, 00:14:29.734 "data_offset": 2048, 00:14:29.734 "data_size": 63488 00:14:29.734 } 00:14:29.734 ] 00:14:29.734 }' 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.734 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.300 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:30.300 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.300 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.300 07:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.300 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.301 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.301 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.301 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:30.301 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.301 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:30.301 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.301 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.301 [2024-11-20 07:11:12.458430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.559 [2024-11-20 07:11:12.623342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:30.559 07:11:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.559 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.559 [2024-11-20 07:11:12.789102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:30.559 [2024-11-20 07:11:12.789162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 BaseBdev2 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.818 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.818 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 [ 00:14:30.818 { 00:14:30.818 "name": "BaseBdev2", 00:14:30.818 "aliases": [ 00:14:30.818 
"f32c7998-7270-4138-954d-10f3f7239161" 00:14:30.818 ], 00:14:30.818 "product_name": "Malloc disk", 00:14:30.818 "block_size": 512, 00:14:30.818 "num_blocks": 65536, 00:14:30.818 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:30.818 "assigned_rate_limits": { 00:14:30.818 "rw_ios_per_sec": 0, 00:14:30.818 "rw_mbytes_per_sec": 0, 00:14:30.818 "r_mbytes_per_sec": 0, 00:14:30.818 "w_mbytes_per_sec": 0 00:14:30.818 }, 00:14:30.818 "claimed": false, 00:14:30.818 "zoned": false, 00:14:30.818 "supported_io_types": { 00:14:30.818 "read": true, 00:14:30.818 "write": true, 00:14:30.818 "unmap": true, 00:14:30.818 "flush": true, 00:14:30.818 "reset": true, 00:14:30.818 "nvme_admin": false, 00:14:30.818 "nvme_io": false, 00:14:30.818 "nvme_io_md": false, 00:14:30.818 "write_zeroes": true, 00:14:30.818 "zcopy": true, 00:14:30.818 "get_zone_info": false, 00:14:30.819 "zone_management": false, 00:14:30.819 "zone_append": false, 00:14:30.819 "compare": false, 00:14:30.819 "compare_and_write": false, 00:14:30.819 "abort": true, 00:14:30.819 "seek_hole": false, 00:14:30.819 "seek_data": false, 00:14:30.819 "copy": true, 00:14:30.819 "nvme_iov_md": false 00:14:30.819 }, 00:14:30.819 "memory_domains": [ 00:14:30.819 { 00:14:30.819 "dma_device_id": "system", 00:14:30.819 "dma_device_type": 1 00:14:30.819 }, 00:14:30.819 { 00:14:30.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.819 "dma_device_type": 2 00:14:30.819 } 00:14:30.819 ], 00:14:30.819 "driver_specific": {} 00:14:30.819 } 00:14:30.819 ] 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:30.819 07:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.819 BaseBdev3 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.819 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 [ 00:14:31.078 { 
00:14:31.078 "name": "BaseBdev3", 00:14:31.078 "aliases": [ 00:14:31.078 "85ad3a8f-74b8-45f3-9b53-20b22b520aec" 00:14:31.078 ], 00:14:31.078 "product_name": "Malloc disk", 00:14:31.078 "block_size": 512, 00:14:31.078 "num_blocks": 65536, 00:14:31.078 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:31.078 "assigned_rate_limits": { 00:14:31.078 "rw_ios_per_sec": 0, 00:14:31.078 "rw_mbytes_per_sec": 0, 00:14:31.078 "r_mbytes_per_sec": 0, 00:14:31.078 "w_mbytes_per_sec": 0 00:14:31.078 }, 00:14:31.078 "claimed": false, 00:14:31.078 "zoned": false, 00:14:31.078 "supported_io_types": { 00:14:31.078 "read": true, 00:14:31.078 "write": true, 00:14:31.078 "unmap": true, 00:14:31.078 "flush": true, 00:14:31.078 "reset": true, 00:14:31.078 "nvme_admin": false, 00:14:31.078 "nvme_io": false, 00:14:31.078 "nvme_io_md": false, 00:14:31.078 "write_zeroes": true, 00:14:31.078 "zcopy": true, 00:14:31.078 "get_zone_info": false, 00:14:31.078 "zone_management": false, 00:14:31.078 "zone_append": false, 00:14:31.078 "compare": false, 00:14:31.078 "compare_and_write": false, 00:14:31.078 "abort": true, 00:14:31.078 "seek_hole": false, 00:14:31.078 "seek_data": false, 00:14:31.078 "copy": true, 00:14:31.078 "nvme_iov_md": false 00:14:31.078 }, 00:14:31.078 "memory_domains": [ 00:14:31.078 { 00:14:31.078 "dma_device_id": "system", 00:14:31.078 "dma_device_type": 1 00:14:31.078 }, 00:14:31.078 { 00:14:31.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.078 "dma_device_type": 2 00:14:31.078 } 00:14:31.078 ], 00:14:31.078 "driver_specific": {} 00:14:31.078 } 00:14:31.078 ] 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.078 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.079 BaseBdev4 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:31.079 [ 00:14:31.079 { 00:14:31.079 "name": "BaseBdev4", 00:14:31.079 "aliases": [ 00:14:31.079 "ebd22d5c-6609-462d-9618-f4b037f5ab83" 00:14:31.079 ], 00:14:31.079 "product_name": "Malloc disk", 00:14:31.079 "block_size": 512, 00:14:31.079 "num_blocks": 65536, 00:14:31.079 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:31.079 "assigned_rate_limits": { 00:14:31.079 "rw_ios_per_sec": 0, 00:14:31.079 "rw_mbytes_per_sec": 0, 00:14:31.079 "r_mbytes_per_sec": 0, 00:14:31.079 "w_mbytes_per_sec": 0 00:14:31.079 }, 00:14:31.079 "claimed": false, 00:14:31.079 "zoned": false, 00:14:31.079 "supported_io_types": { 00:14:31.079 "read": true, 00:14:31.079 "write": true, 00:14:31.079 "unmap": true, 00:14:31.079 "flush": true, 00:14:31.079 "reset": true, 00:14:31.079 "nvme_admin": false, 00:14:31.079 "nvme_io": false, 00:14:31.079 "nvme_io_md": false, 00:14:31.079 "write_zeroes": true, 00:14:31.079 "zcopy": true, 00:14:31.079 "get_zone_info": false, 00:14:31.079 "zone_management": false, 00:14:31.079 "zone_append": false, 00:14:31.079 "compare": false, 00:14:31.079 "compare_and_write": false, 00:14:31.079 "abort": true, 00:14:31.079 "seek_hole": false, 00:14:31.079 "seek_data": false, 00:14:31.079 "copy": true, 00:14:31.079 "nvme_iov_md": false 00:14:31.079 }, 00:14:31.079 "memory_domains": [ 00:14:31.079 { 00:14:31.079 "dma_device_id": "system", 00:14:31.079 "dma_device_type": 1 00:14:31.079 }, 00:14:31.079 { 00:14:31.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.079 "dma_device_type": 2 00:14:31.079 } 00:14:31.079 ], 00:14:31.079 "driver_specific": {} 00:14:31.079 } 00:14:31.079 ] 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.079 07:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.079 [2024-11-20 07:11:13.194645] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.079 [2024-11-20 07:11:13.194754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.079 [2024-11-20 07:11:13.194821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.079 [2024-11-20 07:11:13.196903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.079 [2024-11-20 07:11:13.197023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.079 "name": "Existed_Raid", 00:14:31.079 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:31.079 "strip_size_kb": 64, 00:14:31.079 "state": "configuring", 00:14:31.079 "raid_level": "concat", 00:14:31.079 "superblock": true, 00:14:31.079 "num_base_bdevs": 4, 00:14:31.079 "num_base_bdevs_discovered": 3, 00:14:31.079 "num_base_bdevs_operational": 4, 00:14:31.079 "base_bdevs_list": [ 00:14:31.079 { 00:14:31.079 "name": "BaseBdev1", 00:14:31.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.079 "is_configured": false, 00:14:31.079 "data_offset": 0, 00:14:31.079 "data_size": 0 00:14:31.079 }, 00:14:31.079 { 00:14:31.079 "name": "BaseBdev2", 00:14:31.079 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:31.079 "is_configured": true, 00:14:31.079 "data_offset": 2048, 00:14:31.079 "data_size": 63488 
00:14:31.079 }, 00:14:31.079 { 00:14:31.079 "name": "BaseBdev3", 00:14:31.079 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:31.079 "is_configured": true, 00:14:31.079 "data_offset": 2048, 00:14:31.079 "data_size": 63488 00:14:31.079 }, 00:14:31.079 { 00:14:31.079 "name": "BaseBdev4", 00:14:31.079 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:31.079 "is_configured": true, 00:14:31.079 "data_offset": 2048, 00:14:31.079 "data_size": 63488 00:14:31.079 } 00:14:31.079 ] 00:14:31.079 }' 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.079 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 [2024-11-20 07:11:13.613967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.648 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.648 "name": "Existed_Raid", 00:14:31.648 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:31.648 "strip_size_kb": 64, 00:14:31.648 "state": "configuring", 00:14:31.648 "raid_level": "concat", 00:14:31.648 "superblock": true, 00:14:31.648 "num_base_bdevs": 4, 00:14:31.648 "num_base_bdevs_discovered": 2, 00:14:31.648 "num_base_bdevs_operational": 4, 00:14:31.648 "base_bdevs_list": [ 00:14:31.648 { 00:14:31.648 "name": "BaseBdev1", 00:14:31.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.648 "is_configured": false, 00:14:31.648 "data_offset": 0, 00:14:31.648 "data_size": 0 00:14:31.648 }, 00:14:31.648 { 00:14:31.648 "name": null, 00:14:31.648 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:31.648 "is_configured": false, 00:14:31.648 "data_offset": 0, 00:14:31.648 "data_size": 63488 
00:14:31.648 }, 00:14:31.648 { 00:14:31.648 "name": "BaseBdev3", 00:14:31.648 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:31.648 "is_configured": true, 00:14:31.648 "data_offset": 2048, 00:14:31.648 "data_size": 63488 00:14:31.648 }, 00:14:31.648 { 00:14:31.648 "name": "BaseBdev4", 00:14:31.648 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:31.648 "is_configured": true, 00:14:31.648 "data_offset": 2048, 00:14:31.648 "data_size": 63488 00:14:31.648 } 00:14:31.648 ] 00:14:31.648 }' 00:14:31.649 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.649 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.908 [2024-11-20 07:11:14.126124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.908 BaseBdev1 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:31.908 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.909 [ 00:14:31.909 { 00:14:31.909 "name": "BaseBdev1", 00:14:31.909 "aliases": [ 00:14:31.909 "d146e090-d70b-4b4e-9a63-51e174cd32d1" 00:14:31.909 ], 00:14:31.909 "product_name": "Malloc disk", 00:14:31.909 "block_size": 512, 00:14:31.909 "num_blocks": 65536, 00:14:31.909 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:31.909 "assigned_rate_limits": { 00:14:31.909 "rw_ios_per_sec": 0, 00:14:31.909 "rw_mbytes_per_sec": 0, 
00:14:31.909 "r_mbytes_per_sec": 0, 00:14:31.909 "w_mbytes_per_sec": 0 00:14:31.909 }, 00:14:31.909 "claimed": true, 00:14:31.909 "claim_type": "exclusive_write", 00:14:31.909 "zoned": false, 00:14:31.909 "supported_io_types": { 00:14:31.909 "read": true, 00:14:31.909 "write": true, 00:14:31.909 "unmap": true, 00:14:31.909 "flush": true, 00:14:31.909 "reset": true, 00:14:31.909 "nvme_admin": false, 00:14:31.909 "nvme_io": false, 00:14:31.909 "nvme_io_md": false, 00:14:31.909 "write_zeroes": true, 00:14:31.909 "zcopy": true, 00:14:31.909 "get_zone_info": false, 00:14:31.909 "zone_management": false, 00:14:31.909 "zone_append": false, 00:14:31.909 "compare": false, 00:14:31.909 "compare_and_write": false, 00:14:31.909 "abort": true, 00:14:31.909 "seek_hole": false, 00:14:31.909 "seek_data": false, 00:14:31.909 "copy": true, 00:14:31.909 "nvme_iov_md": false 00:14:31.909 }, 00:14:31.909 "memory_domains": [ 00:14:31.909 { 00:14:31.909 "dma_device_id": "system", 00:14:31.909 "dma_device_type": 1 00:14:31.909 }, 00:14:31.909 { 00:14:31.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.909 "dma_device_type": 2 00:14:31.909 } 00:14:31.909 ], 00:14:31.909 "driver_specific": {} 00:14:31.909 } 00:14:31.909 ] 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:31.909 07:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.909 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.168 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.168 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.168 "name": "Existed_Raid", 00:14:32.168 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:32.168 "strip_size_kb": 64, 00:14:32.168 "state": "configuring", 00:14:32.168 "raid_level": "concat", 00:14:32.168 "superblock": true, 00:14:32.168 "num_base_bdevs": 4, 00:14:32.168 "num_base_bdevs_discovered": 3, 00:14:32.168 "num_base_bdevs_operational": 4, 00:14:32.168 "base_bdevs_list": [ 00:14:32.168 { 00:14:32.168 "name": "BaseBdev1", 00:14:32.168 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:32.168 "is_configured": true, 00:14:32.168 "data_offset": 2048, 00:14:32.168 "data_size": 63488 00:14:32.168 }, 00:14:32.168 { 
00:14:32.168 "name": null, 00:14:32.168 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:32.168 "is_configured": false, 00:14:32.168 "data_offset": 0, 00:14:32.168 "data_size": 63488 00:14:32.168 }, 00:14:32.168 { 00:14:32.168 "name": "BaseBdev3", 00:14:32.168 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:32.168 "is_configured": true, 00:14:32.168 "data_offset": 2048, 00:14:32.168 "data_size": 63488 00:14:32.168 }, 00:14:32.168 { 00:14:32.168 "name": "BaseBdev4", 00:14:32.168 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:32.168 "is_configured": true, 00:14:32.168 "data_offset": 2048, 00:14:32.168 "data_size": 63488 00:14:32.168 } 00:14:32.168 ] 00:14:32.168 }' 00:14:32.168 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.168 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.427 [2024-11-20 07:11:14.645333] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.427 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.687 07:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.687 "name": "Existed_Raid", 00:14:32.687 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:32.687 "strip_size_kb": 64, 00:14:32.687 "state": "configuring", 00:14:32.687 "raid_level": "concat", 00:14:32.687 "superblock": true, 00:14:32.687 "num_base_bdevs": 4, 00:14:32.687 "num_base_bdevs_discovered": 2, 00:14:32.687 "num_base_bdevs_operational": 4, 00:14:32.687 "base_bdevs_list": [ 00:14:32.687 { 00:14:32.687 "name": "BaseBdev1", 00:14:32.687 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:32.687 "is_configured": true, 00:14:32.687 "data_offset": 2048, 00:14:32.687 "data_size": 63488 00:14:32.687 }, 00:14:32.687 { 00:14:32.687 "name": null, 00:14:32.687 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:32.687 "is_configured": false, 00:14:32.687 "data_offset": 0, 00:14:32.687 "data_size": 63488 00:14:32.687 }, 00:14:32.687 { 00:14:32.687 "name": null, 00:14:32.687 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:32.687 "is_configured": false, 00:14:32.687 "data_offset": 0, 00:14:32.687 "data_size": 63488 00:14:32.687 }, 00:14:32.687 { 00:14:32.687 "name": "BaseBdev4", 00:14:32.687 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:32.687 "is_configured": true, 00:14:32.687 "data_offset": 2048, 00:14:32.687 "data_size": 63488 00:14:32.687 } 00:14:32.687 ] 00:14:32.687 }' 00:14:32.687 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.687 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:32.946 
07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.946 [2024-11-20 07:11:15.164516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:32.946 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.947 "name": "Existed_Raid", 00:14:32.947 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:32.947 "strip_size_kb": 64, 00:14:32.947 "state": "configuring", 00:14:32.947 "raid_level": "concat", 00:14:32.947 "superblock": true, 00:14:32.947 "num_base_bdevs": 4, 00:14:32.947 "num_base_bdevs_discovered": 3, 00:14:32.947 "num_base_bdevs_operational": 4, 00:14:32.947 "base_bdevs_list": [ 00:14:32.947 { 00:14:32.947 "name": "BaseBdev1", 00:14:32.947 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:32.947 "is_configured": true, 00:14:32.947 "data_offset": 2048, 00:14:32.947 "data_size": 63488 00:14:32.947 }, 00:14:32.947 { 00:14:32.947 "name": null, 00:14:32.947 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:32.947 "is_configured": false, 00:14:32.947 "data_offset": 0, 00:14:32.947 "data_size": 63488 00:14:32.947 }, 00:14:32.947 { 00:14:32.947 "name": "BaseBdev3", 00:14:32.947 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:32.947 "is_configured": true, 00:14:32.947 "data_offset": 2048, 00:14:32.947 "data_size": 63488 00:14:32.947 }, 00:14:32.947 { 00:14:32.947 "name": "BaseBdev4", 00:14:32.947 "uuid": 
"ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:32.947 "is_configured": true, 00:14:32.947 "data_offset": 2048, 00:14:32.947 "data_size": 63488 00:14:32.947 } 00:14:32.947 ] 00:14:32.947 }' 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.947 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.514 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.514 [2024-11-20 07:11:15.691657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.774 "name": "Existed_Raid", 00:14:33.774 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:33.774 "strip_size_kb": 64, 00:14:33.774 "state": "configuring", 00:14:33.774 "raid_level": "concat", 00:14:33.774 "superblock": true, 00:14:33.774 "num_base_bdevs": 4, 00:14:33.774 "num_base_bdevs_discovered": 2, 00:14:33.774 "num_base_bdevs_operational": 4, 00:14:33.774 "base_bdevs_list": [ 00:14:33.774 { 00:14:33.774 "name": null, 00:14:33.774 
"uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:33.774 "is_configured": false, 00:14:33.774 "data_offset": 0, 00:14:33.774 "data_size": 63488 00:14:33.774 }, 00:14:33.774 { 00:14:33.774 "name": null, 00:14:33.774 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:33.774 "is_configured": false, 00:14:33.774 "data_offset": 0, 00:14:33.774 "data_size": 63488 00:14:33.774 }, 00:14:33.774 { 00:14:33.774 "name": "BaseBdev3", 00:14:33.774 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:33.774 "is_configured": true, 00:14:33.774 "data_offset": 2048, 00:14:33.774 "data_size": 63488 00:14:33.774 }, 00:14:33.774 { 00:14:33.774 "name": "BaseBdev4", 00:14:33.774 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:33.774 "is_configured": true, 00:14:33.774 "data_offset": 2048, 00:14:33.774 "data_size": 63488 00:14:33.774 } 00:14:33.774 ] 00:14:33.774 }' 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.774 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.034 [2024-11-20 07:11:16.275490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.034 07:11:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.294 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.294 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.294 "name": "Existed_Raid", 00:14:34.294 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:34.294 "strip_size_kb": 64, 00:14:34.294 "state": "configuring", 00:14:34.294 "raid_level": "concat", 00:14:34.294 "superblock": true, 00:14:34.294 "num_base_bdevs": 4, 00:14:34.294 "num_base_bdevs_discovered": 3, 00:14:34.294 "num_base_bdevs_operational": 4, 00:14:34.294 "base_bdevs_list": [ 00:14:34.294 { 00:14:34.294 "name": null, 00:14:34.294 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:34.294 "is_configured": false, 00:14:34.294 "data_offset": 0, 00:14:34.294 "data_size": 63488 00:14:34.294 }, 00:14:34.294 { 00:14:34.294 "name": "BaseBdev2", 00:14:34.294 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:34.294 "is_configured": true, 00:14:34.294 "data_offset": 2048, 00:14:34.294 "data_size": 63488 00:14:34.294 }, 00:14:34.294 { 00:14:34.294 "name": "BaseBdev3", 00:14:34.294 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:34.294 "is_configured": true, 00:14:34.294 "data_offset": 2048, 00:14:34.294 "data_size": 63488 00:14:34.294 }, 00:14:34.294 { 00:14:34.294 "name": "BaseBdev4", 00:14:34.294 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:34.294 "is_configured": true, 00:14:34.294 "data_offset": 2048, 00:14:34.294 "data_size": 63488 00:14:34.294 } 00:14:34.294 ] 00:14:34.294 }' 00:14:34.294 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.294 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.553 07:11:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d146e090-d70b-4b4e-9a63-51e174cd32d1 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.553 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.812 [2024-11-20 07:11:16.854018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:34.812 [2024-11-20 07:11:16.854372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:34.812 [2024-11-20 07:11:16.854428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:34.812 [2024-11-20 07:11:16.854749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:34.812 [2024-11-20 07:11:16.854959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:34.812 NewBaseBdev 00:14:34.812 [2024-11-20 07:11:16.855020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:34.812 [2024-11-20 07:11:16.855230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.812 07:11:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.812 [ 00:14:34.812 { 00:14:34.812 "name": "NewBaseBdev", 00:14:34.812 "aliases": [ 00:14:34.812 "d146e090-d70b-4b4e-9a63-51e174cd32d1" 00:14:34.812 ], 00:14:34.812 "product_name": "Malloc disk", 00:14:34.812 "block_size": 512, 00:14:34.812 "num_blocks": 65536, 00:14:34.812 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:34.812 "assigned_rate_limits": { 00:14:34.812 "rw_ios_per_sec": 0, 00:14:34.812 "rw_mbytes_per_sec": 0, 00:14:34.812 "r_mbytes_per_sec": 0, 00:14:34.812 "w_mbytes_per_sec": 0 00:14:34.812 }, 00:14:34.812 "claimed": true, 00:14:34.812 "claim_type": "exclusive_write", 00:14:34.812 "zoned": false, 00:14:34.812 "supported_io_types": { 00:14:34.812 "read": true, 00:14:34.812 "write": true, 00:14:34.812 "unmap": true, 00:14:34.812 "flush": true, 00:14:34.812 "reset": true, 00:14:34.812 "nvme_admin": false, 00:14:34.812 "nvme_io": false, 00:14:34.812 "nvme_io_md": false, 00:14:34.812 "write_zeroes": true, 00:14:34.812 "zcopy": true, 00:14:34.812 "get_zone_info": false, 00:14:34.812 "zone_management": false, 00:14:34.812 "zone_append": false, 00:14:34.812 "compare": false, 00:14:34.812 "compare_and_write": false, 00:14:34.812 "abort": true, 00:14:34.812 "seek_hole": false, 00:14:34.812 "seek_data": false, 00:14:34.812 "copy": true, 00:14:34.812 "nvme_iov_md": false 00:14:34.812 }, 00:14:34.812 "memory_domains": [ 00:14:34.812 { 00:14:34.812 "dma_device_id": "system", 00:14:34.812 "dma_device_type": 1 00:14:34.812 }, 00:14:34.812 { 00:14:34.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.812 "dma_device_type": 2 00:14:34.812 } 00:14:34.812 ], 00:14:34.812 "driver_specific": {} 00:14:34.812 } 00:14:34.812 ] 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.812 07:11:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.812 "name": "Existed_Raid", 00:14:34.812 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:34.812 "strip_size_kb": 64, 00:14:34.812 
"state": "online", 00:14:34.812 "raid_level": "concat", 00:14:34.812 "superblock": true, 00:14:34.812 "num_base_bdevs": 4, 00:14:34.812 "num_base_bdevs_discovered": 4, 00:14:34.812 "num_base_bdevs_operational": 4, 00:14:34.812 "base_bdevs_list": [ 00:14:34.812 { 00:14:34.812 "name": "NewBaseBdev", 00:14:34.812 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:34.812 "is_configured": true, 00:14:34.812 "data_offset": 2048, 00:14:34.812 "data_size": 63488 00:14:34.812 }, 00:14:34.812 { 00:14:34.812 "name": "BaseBdev2", 00:14:34.812 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:34.812 "is_configured": true, 00:14:34.812 "data_offset": 2048, 00:14:34.812 "data_size": 63488 00:14:34.812 }, 00:14:34.812 { 00:14:34.812 "name": "BaseBdev3", 00:14:34.812 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:34.812 "is_configured": true, 00:14:34.812 "data_offset": 2048, 00:14:34.812 "data_size": 63488 00:14:34.812 }, 00:14:34.812 { 00:14:34.812 "name": "BaseBdev4", 00:14:34.812 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:34.812 "is_configured": true, 00:14:34.812 "data_offset": 2048, 00:14:34.812 "data_size": 63488 00:14:34.812 } 00:14:34.812 ] 00:14:34.812 }' 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.812 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.380 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.381 
07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.381 [2024-11-20 07:11:17.361657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:35.381 "name": "Existed_Raid", 00:14:35.381 "aliases": [ 00:14:35.381 "07bd0cc0-6e4b-4255-882b-c157f3c12199" 00:14:35.381 ], 00:14:35.381 "product_name": "Raid Volume", 00:14:35.381 "block_size": 512, 00:14:35.381 "num_blocks": 253952, 00:14:35.381 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:35.381 "assigned_rate_limits": { 00:14:35.381 "rw_ios_per_sec": 0, 00:14:35.381 "rw_mbytes_per_sec": 0, 00:14:35.381 "r_mbytes_per_sec": 0, 00:14:35.381 "w_mbytes_per_sec": 0 00:14:35.381 }, 00:14:35.381 "claimed": false, 00:14:35.381 "zoned": false, 00:14:35.381 "supported_io_types": { 00:14:35.381 "read": true, 00:14:35.381 "write": true, 00:14:35.381 "unmap": true, 00:14:35.381 "flush": true, 00:14:35.381 "reset": true, 00:14:35.381 "nvme_admin": false, 00:14:35.381 "nvme_io": false, 00:14:35.381 "nvme_io_md": false, 00:14:35.381 "write_zeroes": true, 00:14:35.381 "zcopy": false, 00:14:35.381 "get_zone_info": false, 00:14:35.381 "zone_management": false, 00:14:35.381 "zone_append": false, 00:14:35.381 "compare": false, 00:14:35.381 "compare_and_write": false, 00:14:35.381 "abort": 
false, 00:14:35.381 "seek_hole": false, 00:14:35.381 "seek_data": false, 00:14:35.381 "copy": false, 00:14:35.381 "nvme_iov_md": false 00:14:35.381 }, 00:14:35.381 "memory_domains": [ 00:14:35.381 { 00:14:35.381 "dma_device_id": "system", 00:14:35.381 "dma_device_type": 1 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.381 "dma_device_type": 2 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "dma_device_id": "system", 00:14:35.381 "dma_device_type": 1 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.381 "dma_device_type": 2 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "dma_device_id": "system", 00:14:35.381 "dma_device_type": 1 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.381 "dma_device_type": 2 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "dma_device_id": "system", 00:14:35.381 "dma_device_type": 1 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.381 "dma_device_type": 2 00:14:35.381 } 00:14:35.381 ], 00:14:35.381 "driver_specific": { 00:14:35.381 "raid": { 00:14:35.381 "uuid": "07bd0cc0-6e4b-4255-882b-c157f3c12199", 00:14:35.381 "strip_size_kb": 64, 00:14:35.381 "state": "online", 00:14:35.381 "raid_level": "concat", 00:14:35.381 "superblock": true, 00:14:35.381 "num_base_bdevs": 4, 00:14:35.381 "num_base_bdevs_discovered": 4, 00:14:35.381 "num_base_bdevs_operational": 4, 00:14:35.381 "base_bdevs_list": [ 00:14:35.381 { 00:14:35.381 "name": "NewBaseBdev", 00:14:35.381 "uuid": "d146e090-d70b-4b4e-9a63-51e174cd32d1", 00:14:35.381 "is_configured": true, 00:14:35.381 "data_offset": 2048, 00:14:35.381 "data_size": 63488 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "name": "BaseBdev2", 00:14:35.381 "uuid": "f32c7998-7270-4138-954d-10f3f7239161", 00:14:35.381 "is_configured": true, 00:14:35.381 "data_offset": 2048, 00:14:35.381 "data_size": 63488 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 
"name": "BaseBdev3", 00:14:35.381 "uuid": "85ad3a8f-74b8-45f3-9b53-20b22b520aec", 00:14:35.381 "is_configured": true, 00:14:35.381 "data_offset": 2048, 00:14:35.381 "data_size": 63488 00:14:35.381 }, 00:14:35.381 { 00:14:35.381 "name": "BaseBdev4", 00:14:35.381 "uuid": "ebd22d5c-6609-462d-9618-f4b037f5ab83", 00:14:35.381 "is_configured": true, 00:14:35.381 "data_offset": 2048, 00:14:35.381 "data_size": 63488 00:14:35.381 } 00:14:35.381 ] 00:14:35.381 } 00:14:35.381 } 00:14:35.381 }' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:35.381 BaseBdev2 00:14:35.381 BaseBdev3 00:14:35.381 BaseBdev4' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.381 07:11:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.381 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:35.382 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.382 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.382 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.641 [2024-11-20 07:11:17.728764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.641 [2024-11-20 07:11:17.728861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.641 [2024-11-20 07:11:17.728967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.641 [2024-11-20 07:11:17.729061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.641 [2024-11-20 07:11:17.729073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72280 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72280 ']' 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72280 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72280 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.641 killing process with pid 72280 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72280' 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72280 00:14:35.641 [2024-11-20 07:11:17.766377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.641 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72280 00:14:36.209 [2024-11-20 07:11:18.223829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:37.592 07:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:37.592 00:14:37.592 real 0m12.008s 00:14:37.592 user 0m19.021s 00:14:37.592 sys 0m2.043s 00:14:37.592 07:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.592 
************************************ 00:14:37.592 END TEST raid_state_function_test_sb 00:14:37.592 ************************************ 00:14:37.592 07:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.592 07:11:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:37.592 07:11:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:37.592 07:11:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.592 07:11:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:37.592 ************************************ 00:14:37.592 START TEST raid_superblock_test 00:14:37.592 ************************************ 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72956 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72956 00:14:37.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72956 ']' 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.592 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.592 [2024-11-20 07:11:19.644506] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:37.592 [2024-11-20 07:11:19.644672] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72956 ] 00:14:37.592 [2024-11-20 07:11:19.809733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.851 [2024-11-20 07:11:19.934844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.110 [2024-11-20 07:11:20.153225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.110 [2024-11-20 07:11:20.153264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:38.369 
07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.369 malloc1 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.369 [2024-11-20 07:11:20.562745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:38.369 [2024-11-20 07:11:20.562862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.369 [2024-11-20 07:11:20.562921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:38.369 [2024-11-20 07:11:20.562959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.369 [2024-11-20 07:11:20.566012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.369 [2024-11-20 07:11:20.566385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:38.369 pt1 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.369 malloc2 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.369 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.629 [2024-11-20 07:11:20.637328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.629 [2024-11-20 07:11:20.637485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.629 [2024-11-20 07:11:20.637539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:38.629 [2024-11-20 07:11:20.637574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.629 [2024-11-20 07:11:20.640029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.629 [2024-11-20 07:11:20.640103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.629 
pt2 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.629 malloc3 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.629 [2024-11-20 07:11:20.721140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:38.629 [2024-11-20 07:11:20.721318] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.629 [2024-11-20 07:11:20.721385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:38.629 [2024-11-20 07:11:20.721440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.629 [2024-11-20 07:11:20.724275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.629 [2024-11-20 07:11:20.724367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:38.629 pt3 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.629 malloc4 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.629 [2024-11-20 07:11:20.788182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:38.629 [2024-11-20 07:11:20.788340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.629 [2024-11-20 07:11:20.788381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:38.629 [2024-11-20 07:11:20.788429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.629 [2024-11-20 07:11:20.790975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.629 [2024-11-20 07:11:20.791052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:38.629 pt4 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.629 [2024-11-20 07:11:20.800204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:38.629 [2024-11-20 
07:11:20.802702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.629 [2024-11-20 07:11:20.802842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:38.629 [2024-11-20 07:11:20.802940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:38.629 [2024-11-20 07:11:20.803207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:38.629 [2024-11-20 07:11:20.803261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:38.629 [2024-11-20 07:11:20.803606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:38.629 [2024-11-20 07:11:20.803846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:38.629 [2024-11-20 07:11:20.803894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:38.629 [2024-11-20 07:11:20.804154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.629 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.630 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.630 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.630 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.630 "name": "raid_bdev1", 00:14:38.630 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:38.630 "strip_size_kb": 64, 00:14:38.630 "state": "online", 00:14:38.630 "raid_level": "concat", 00:14:38.630 "superblock": true, 00:14:38.630 "num_base_bdevs": 4, 00:14:38.630 "num_base_bdevs_discovered": 4, 00:14:38.630 "num_base_bdevs_operational": 4, 00:14:38.630 "base_bdevs_list": [ 00:14:38.630 { 00:14:38.630 "name": "pt1", 00:14:38.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.630 "is_configured": true, 00:14:38.630 "data_offset": 2048, 00:14:38.630 "data_size": 63488 00:14:38.630 }, 00:14:38.630 { 00:14:38.630 "name": "pt2", 00:14:38.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.630 "is_configured": true, 00:14:38.630 "data_offset": 2048, 00:14:38.630 "data_size": 63488 00:14:38.630 }, 00:14:38.630 { 00:14:38.630 "name": "pt3", 00:14:38.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.630 "is_configured": true, 00:14:38.630 "data_offset": 2048, 00:14:38.630 
"data_size": 63488 00:14:38.630 }, 00:14:38.630 { 00:14:38.630 "name": "pt4", 00:14:38.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.630 "is_configured": true, 00:14:38.630 "data_offset": 2048, 00:14:38.630 "data_size": 63488 00:14:38.630 } 00:14:38.630 ] 00:14:38.630 }' 00:14:38.630 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.630 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.201 [2024-11-20 07:11:21.311774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.201 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.201 "name": "raid_bdev1", 00:14:39.201 "aliases": [ 00:14:39.201 "af9abb04-8083-4f12-8140-9ce45153095b" 
00:14:39.201 ], 00:14:39.201 "product_name": "Raid Volume", 00:14:39.201 "block_size": 512, 00:14:39.201 "num_blocks": 253952, 00:14:39.201 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:39.201 "assigned_rate_limits": { 00:14:39.201 "rw_ios_per_sec": 0, 00:14:39.201 "rw_mbytes_per_sec": 0, 00:14:39.201 "r_mbytes_per_sec": 0, 00:14:39.201 "w_mbytes_per_sec": 0 00:14:39.201 }, 00:14:39.201 "claimed": false, 00:14:39.201 "zoned": false, 00:14:39.201 "supported_io_types": { 00:14:39.201 "read": true, 00:14:39.201 "write": true, 00:14:39.201 "unmap": true, 00:14:39.201 "flush": true, 00:14:39.201 "reset": true, 00:14:39.201 "nvme_admin": false, 00:14:39.201 "nvme_io": false, 00:14:39.201 "nvme_io_md": false, 00:14:39.201 "write_zeroes": true, 00:14:39.201 "zcopy": false, 00:14:39.201 "get_zone_info": false, 00:14:39.201 "zone_management": false, 00:14:39.201 "zone_append": false, 00:14:39.201 "compare": false, 00:14:39.201 "compare_and_write": false, 00:14:39.201 "abort": false, 00:14:39.201 "seek_hole": false, 00:14:39.201 "seek_data": false, 00:14:39.201 "copy": false, 00:14:39.201 "nvme_iov_md": false 00:14:39.201 }, 00:14:39.201 "memory_domains": [ 00:14:39.202 { 00:14:39.202 "dma_device_id": "system", 00:14:39.202 "dma_device_type": 1 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.202 "dma_device_type": 2 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "dma_device_id": "system", 00:14:39.202 "dma_device_type": 1 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.202 "dma_device_type": 2 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "dma_device_id": "system", 00:14:39.202 "dma_device_type": 1 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.202 "dma_device_type": 2 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "dma_device_id": "system", 00:14:39.202 "dma_device_type": 1 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:39.202 "dma_device_type": 2 00:14:39.202 } 00:14:39.202 ], 00:14:39.202 "driver_specific": { 00:14:39.202 "raid": { 00:14:39.202 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:39.202 "strip_size_kb": 64, 00:14:39.202 "state": "online", 00:14:39.202 "raid_level": "concat", 00:14:39.202 "superblock": true, 00:14:39.202 "num_base_bdevs": 4, 00:14:39.202 "num_base_bdevs_discovered": 4, 00:14:39.202 "num_base_bdevs_operational": 4, 00:14:39.202 "base_bdevs_list": [ 00:14:39.202 { 00:14:39.202 "name": "pt1", 00:14:39.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.202 "is_configured": true, 00:14:39.202 "data_offset": 2048, 00:14:39.202 "data_size": 63488 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "name": "pt2", 00:14:39.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.202 "is_configured": true, 00:14:39.202 "data_offset": 2048, 00:14:39.202 "data_size": 63488 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "name": "pt3", 00:14:39.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.202 "is_configured": true, 00:14:39.202 "data_offset": 2048, 00:14:39.202 "data_size": 63488 00:14:39.202 }, 00:14:39.202 { 00:14:39.202 "name": "pt4", 00:14:39.202 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.202 "is_configured": true, 00:14:39.202 "data_offset": 2048, 00:14:39.202 "data_size": 63488 00:14:39.202 } 00:14:39.202 ] 00:14:39.202 } 00:14:39.202 } 00:14:39.202 }' 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:39.202 pt2 00:14:39.202 pt3 00:14:39.202 pt4' 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.202 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.461 07:11:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:39.461 [2024-11-20 07:11:21.659161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=af9abb04-8083-4f12-8140-9ce45153095b 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z af9abb04-8083-4f12-8140-9ce45153095b ']' 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.461 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.461 [2024-11-20 07:11:21.690748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.461 [2024-11-20 07:11:21.690874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.462 [2024-11-20 07:11:21.691015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.462 [2024-11-20 07:11:21.691137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.462 [2024-11-20 07:11:21.691206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:39.462 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.462 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.462 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:39.462 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:39.462 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.462 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.720 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:39.720 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:39.720 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.720 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 [2024-11-20 07:11:21.854593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:39.721 [2024-11-20 07:11:21.857609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:39.721 [2024-11-20 07:11:21.857739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:39.721 [2024-11-20 07:11:21.857803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:39.721 [2024-11-20 07:11:21.857909] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:39.721 [2024-11-20 07:11:21.858045] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:39.721 [2024-11-20 07:11:21.858149] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:39.721 [2024-11-20 07:11:21.858232] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:39.721 [2024-11-20 07:11:21.858250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.721 [2024-11-20 07:11:21.858264] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:39.721 request: 00:14:39.721 { 00:14:39.721 "name": "raid_bdev1", 00:14:39.721 "raid_level": "concat", 00:14:39.721 "base_bdevs": [ 00:14:39.721 "malloc1", 00:14:39.721 "malloc2", 00:14:39.721 "malloc3", 00:14:39.721 "malloc4" 00:14:39.721 ], 00:14:39.721 "strip_size_kb": 64, 00:14:39.721 "superblock": false, 00:14:39.721 "method": "bdev_raid_create", 00:14:39.721 "req_id": 1 00:14:39.721 } 00:14:39.721 Got JSON-RPC error response 00:14:39.721 response: 00:14:39.721 { 00:14:39.721 "code": -17, 00:14:39.721 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:39.721 } 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 [2024-11-20 07:11:21.918471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.721 [2024-11-20 07:11:21.918636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.721 [2024-11-20 07:11:21.918689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:39.721 [2024-11-20 07:11:21.918732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.721 [2024-11-20 07:11:21.921468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.721 [2024-11-20 07:11:21.921549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.721 [2024-11-20 07:11:21.921671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:39.721 [2024-11-20 07:11:21.921771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.721 pt1 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.721 "name": "raid_bdev1", 00:14:39.722 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:39.722 "strip_size_kb": 64, 00:14:39.722 "state": "configuring", 00:14:39.722 "raid_level": "concat", 00:14:39.722 "superblock": true, 00:14:39.722 "num_base_bdevs": 4, 00:14:39.722 "num_base_bdevs_discovered": 1, 00:14:39.722 "num_base_bdevs_operational": 4, 00:14:39.722 "base_bdevs_list": [ 00:14:39.722 { 00:14:39.722 "name": "pt1", 00:14:39.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.722 "is_configured": true, 00:14:39.722 "data_offset": 2048, 00:14:39.722 "data_size": 63488 00:14:39.722 }, 00:14:39.722 { 00:14:39.722 "name": null, 00:14:39.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.722 "is_configured": false, 00:14:39.722 "data_offset": 2048, 00:14:39.722 "data_size": 63488 00:14:39.722 }, 00:14:39.722 { 00:14:39.722 "name": null, 00:14:39.722 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:39.722 "is_configured": false, 00:14:39.722 "data_offset": 2048, 00:14:39.722 "data_size": 63488 00:14:39.722 }, 00:14:39.722 { 00:14:39.722 "name": null, 00:14:39.722 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.722 "is_configured": false, 00:14:39.722 "data_offset": 2048, 00:14:39.722 "data_size": 63488 00:14:39.722 } 00:14:39.722 ] 00:14:39.722 }' 00:14:39.722 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.722 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.288 [2024-11-20 07:11:22.413635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.288 [2024-11-20 07:11:22.413841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.288 [2024-11-20 07:11:22.413871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:40.288 [2024-11-20 07:11:22.413884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.288 [2024-11-20 07:11:22.414460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.288 [2024-11-20 07:11:22.414484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.288 [2024-11-20 07:11:22.414603] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.288 [2024-11-20 07:11:22.414633] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.288 pt2 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.288 [2024-11-20 07:11:22.425646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.288 07:11:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.288 "name": "raid_bdev1", 00:14:40.288 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:40.288 "strip_size_kb": 64, 00:14:40.288 "state": "configuring", 00:14:40.288 "raid_level": "concat", 00:14:40.288 "superblock": true, 00:14:40.288 "num_base_bdevs": 4, 00:14:40.288 "num_base_bdevs_discovered": 1, 00:14:40.288 "num_base_bdevs_operational": 4, 00:14:40.288 "base_bdevs_list": [ 00:14:40.288 { 00:14:40.288 "name": "pt1", 00:14:40.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.288 "is_configured": true, 00:14:40.288 "data_offset": 2048, 00:14:40.288 "data_size": 63488 00:14:40.288 }, 00:14:40.288 { 00:14:40.288 "name": null, 00:14:40.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.288 "is_configured": false, 00:14:40.288 "data_offset": 0, 00:14:40.288 "data_size": 63488 00:14:40.288 }, 00:14:40.288 { 00:14:40.288 "name": null, 00:14:40.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.288 "is_configured": false, 00:14:40.288 "data_offset": 2048, 00:14:40.288 "data_size": 63488 00:14:40.288 }, 00:14:40.288 { 00:14:40.288 "name": null, 00:14:40.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.288 "is_configured": false, 00:14:40.288 "data_offset": 2048, 00:14:40.288 "data_size": 63488 00:14:40.288 } 00:14:40.288 ] 00:14:40.288 }' 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.288 07:11:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.864 [2024-11-20 07:11:22.872915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.864 [2024-11-20 07:11:22.873113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.864 [2024-11-20 07:11:22.873160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:40.864 [2024-11-20 07:11:22.873240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.864 [2024-11-20 07:11:22.873843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.864 [2024-11-20 07:11:22.873907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.864 [2024-11-20 07:11:22.874048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.864 [2024-11-20 07:11:22.874105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.864 pt2 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.864 [2024-11-20 07:11:22.884804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.864 [2024-11-20 07:11:22.884898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.864 [2024-11-20 07:11:22.884953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:40.864 [2024-11-20 07:11:22.884994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.864 [2024-11-20 07:11:22.885511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.864 [2024-11-20 07:11:22.885581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.864 [2024-11-20 07:11:22.885693] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:40.864 [2024-11-20 07:11:22.885747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.864 pt3 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.864 [2024-11-20 07:11:22.896754] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:40.864 [2024-11-20 07:11:22.896843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.864 [2024-11-20 07:11:22.896882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:40.864 [2024-11-20 07:11:22.896916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.864 [2024-11-20 07:11:22.897378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.864 [2024-11-20 07:11:22.897439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:40.864 [2024-11-20 07:11:22.897537] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:40.864 [2024-11-20 07:11:22.897584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:40.864 [2024-11-20 07:11:22.897777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:40.864 [2024-11-20 07:11:22.897816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:40.864 [2024-11-20 07:11:22.898095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:40.864 [2024-11-20 07:11:22.898293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:40.864 [2024-11-20 07:11:22.898346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:40.864 [2024-11-20 07:11:22.898517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.864 pt4 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.864 "name": "raid_bdev1", 00:14:40.864 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:40.864 "strip_size_kb": 64, 00:14:40.864 "state": "online", 00:14:40.864 "raid_level": "concat", 00:14:40.864 
"superblock": true, 00:14:40.864 "num_base_bdevs": 4, 00:14:40.864 "num_base_bdevs_discovered": 4, 00:14:40.864 "num_base_bdevs_operational": 4, 00:14:40.864 "base_bdevs_list": [ 00:14:40.864 { 00:14:40.864 "name": "pt1", 00:14:40.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.864 "is_configured": true, 00:14:40.864 "data_offset": 2048, 00:14:40.864 "data_size": 63488 00:14:40.864 }, 00:14:40.864 { 00:14:40.864 "name": "pt2", 00:14:40.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.864 "is_configured": true, 00:14:40.864 "data_offset": 2048, 00:14:40.864 "data_size": 63488 00:14:40.864 }, 00:14:40.864 { 00:14:40.864 "name": "pt3", 00:14:40.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.864 "is_configured": true, 00:14:40.864 "data_offset": 2048, 00:14:40.864 "data_size": 63488 00:14:40.864 }, 00:14:40.864 { 00:14:40.864 "name": "pt4", 00:14:40.864 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.864 "is_configured": true, 00:14:40.864 "data_offset": 2048, 00:14:40.864 "data_size": 63488 00:14:40.864 } 00:14:40.864 ] 00:14:40.864 }' 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.864 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.122 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:41.122 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:41.122 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.122 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.122 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.122 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.122 07:11:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.123 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.123 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.123 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.123 [2024-11-20 07:11:23.384420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.381 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.381 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.381 "name": "raid_bdev1", 00:14:41.381 "aliases": [ 00:14:41.381 "af9abb04-8083-4f12-8140-9ce45153095b" 00:14:41.381 ], 00:14:41.381 "product_name": "Raid Volume", 00:14:41.381 "block_size": 512, 00:14:41.381 "num_blocks": 253952, 00:14:41.381 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:41.381 "assigned_rate_limits": { 00:14:41.381 "rw_ios_per_sec": 0, 00:14:41.381 "rw_mbytes_per_sec": 0, 00:14:41.381 "r_mbytes_per_sec": 0, 00:14:41.381 "w_mbytes_per_sec": 0 00:14:41.381 }, 00:14:41.381 "claimed": false, 00:14:41.381 "zoned": false, 00:14:41.381 "supported_io_types": { 00:14:41.381 "read": true, 00:14:41.381 "write": true, 00:14:41.381 "unmap": true, 00:14:41.381 "flush": true, 00:14:41.381 "reset": true, 00:14:41.381 "nvme_admin": false, 00:14:41.381 "nvme_io": false, 00:14:41.381 "nvme_io_md": false, 00:14:41.381 "write_zeroes": true, 00:14:41.381 "zcopy": false, 00:14:41.381 "get_zone_info": false, 00:14:41.381 "zone_management": false, 00:14:41.381 "zone_append": false, 00:14:41.381 "compare": false, 00:14:41.381 "compare_and_write": false, 00:14:41.381 "abort": false, 00:14:41.381 "seek_hole": false, 00:14:41.381 "seek_data": false, 00:14:41.381 "copy": false, 00:14:41.381 "nvme_iov_md": false 00:14:41.381 }, 00:14:41.381 
"memory_domains": [ 00:14:41.381 { 00:14:41.381 "dma_device_id": "system", 00:14:41.381 "dma_device_type": 1 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.381 "dma_device_type": 2 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "dma_device_id": "system", 00:14:41.381 "dma_device_type": 1 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.381 "dma_device_type": 2 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "dma_device_id": "system", 00:14:41.381 "dma_device_type": 1 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.381 "dma_device_type": 2 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "dma_device_id": "system", 00:14:41.381 "dma_device_type": 1 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.381 "dma_device_type": 2 00:14:41.381 } 00:14:41.381 ], 00:14:41.381 "driver_specific": { 00:14:41.381 "raid": { 00:14:41.381 "uuid": "af9abb04-8083-4f12-8140-9ce45153095b", 00:14:41.381 "strip_size_kb": 64, 00:14:41.381 "state": "online", 00:14:41.381 "raid_level": "concat", 00:14:41.381 "superblock": true, 00:14:41.381 "num_base_bdevs": 4, 00:14:41.381 "num_base_bdevs_discovered": 4, 00:14:41.381 "num_base_bdevs_operational": 4, 00:14:41.381 "base_bdevs_list": [ 00:14:41.381 { 00:14:41.381 "name": "pt1", 00:14:41.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.381 "is_configured": true, 00:14:41.381 "data_offset": 2048, 00:14:41.381 "data_size": 63488 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "name": "pt2", 00:14:41.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.381 "is_configured": true, 00:14:41.381 "data_offset": 2048, 00:14:41.381 "data_size": 63488 00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "name": "pt3", 00:14:41.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.381 "is_configured": true, 00:14:41.381 "data_offset": 2048, 00:14:41.381 "data_size": 63488 
00:14:41.381 }, 00:14:41.381 { 00:14:41.381 "name": "pt4", 00:14:41.381 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.381 "is_configured": true, 00:14:41.381 "data_offset": 2048, 00:14:41.381 "data_size": 63488 00:14:41.381 } 00:14:41.381 ] 00:14:41.381 } 00:14:41.381 } 00:14:41.381 }' 00:14:41.381 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.381 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:41.381 pt2 00:14:41.381 pt3 00:14:41.382 pt4' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.382 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.688 [2024-11-20 07:11:23.719796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' af9abb04-8083-4f12-8140-9ce45153095b '!=' af9abb04-8083-4f12-8140-9ce45153095b ']' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72956 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72956 ']' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72956 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72956 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.688 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72956' 00:14:41.689 killing process with pid 72956 00:14:41.689 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72956 00:14:41.689 [2024-11-20 07:11:23.799469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.689 07:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72956 00:14:41.689 [2024-11-20 07:11:23.799705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.689 [2024-11-20 07:11:23.799806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.689 [2024-11-20 07:11:23.799872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:42.273 [2024-11-20 07:11:24.268350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.650 07:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:43.650 00:14:43.650 real 0m6.084s 00:14:43.650 user 0m8.610s 00:14:43.650 sys 0m1.031s 00:14:43.650 07:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.650 07:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.650 ************************************ 00:14:43.650 END TEST raid_superblock_test 
00:14:43.650 ************************************ 00:14:43.650 07:11:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:14:43.650 07:11:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:43.650 07:11:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.650 07:11:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.650 ************************************ 00:14:43.650 START TEST raid_read_error_test 00:14:43.650 ************************************ 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:43.650 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jkXWkDU8il 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73221 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73221 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73221 ']' 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.651 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.651 [2024-11-20 07:11:25.797721] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:43.651 [2024-11-20 07:11:25.797912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73221 ] 00:14:43.909 [2024-11-20 07:11:25.975203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.909 [2024-11-20 07:11:26.119143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.168 [2024-11-20 07:11:26.381179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.168 [2024-11-20 07:11:26.381365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.426 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.426 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:44.426 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.426 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.426 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.426 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 BaseBdev1_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 true 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 [2024-11-20 07:11:26.737035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:44.690 [2024-11-20 07:11:26.737213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.690 [2024-11-20 07:11:26.737259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:44.690 [2024-11-20 07:11:26.737343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.690 [2024-11-20 07:11:26.739915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.690 BaseBdev1 00:14:44.690 [2024-11-20 07:11:26.739999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 BaseBdev2_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 true 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 [2024-11-20 07:11:26.813841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:44.690 [2024-11-20 07:11:26.813990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.690 [2024-11-20 07:11:26.814026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:44.690 [2024-11-20 07:11:26.814060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.690 [2024-11-20 07:11:26.816573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.690 [2024-11-20 07:11:26.816651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.690 BaseBdev2 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 BaseBdev3_malloc 00:14:44.690 07:11:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 true 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.690 [2024-11-20 07:11:26.909726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:44.690 [2024-11-20 07:11:26.909905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.690 [2024-11-20 07:11:26.909953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:44.690 [2024-11-20 07:11:26.910003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.690 [2024-11-20 07:11:26.912895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.690 [2024-11-20 07:11:26.912983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.690 BaseBdev3 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.690 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.952 BaseBdev4_malloc 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.952 true 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.952 [2024-11-20 07:11:26.987587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:44.952 [2024-11-20 07:11:26.987724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.952 [2024-11-20 07:11:26.987763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:44.952 [2024-11-20 07:11:26.987795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.952 [2024-11-20 07:11:26.990394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.952 [2024-11-20 07:11:26.990475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:44.952 BaseBdev4 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.952 07:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.952 [2024-11-20 07:11:26.999661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.952 [2024-11-20 07:11:27.002107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.952 [2024-11-20 07:11:27.002258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.952 [2024-11-20 07:11:27.002386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.952 [2024-11-20 07:11:27.002740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:44.952 [2024-11-20 07:11:27.002798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:44.952 [2024-11-20 07:11:27.003164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:44.952 [2024-11-20 07:11:27.003433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:44.952 [2024-11-20 07:11:27.003484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:44.952 [2024-11-20 07:11:27.003800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:44.952 07:11:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.952 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.952 "name": "raid_bdev1", 00:14:44.952 "uuid": "721ca755-bdfb-4c5e-9121-7504d8f9b20d", 00:14:44.952 "strip_size_kb": 64, 00:14:44.952 "state": "online", 00:14:44.952 "raid_level": "concat", 00:14:44.952 "superblock": true, 00:14:44.952 "num_base_bdevs": 4, 00:14:44.952 "num_base_bdevs_discovered": 4, 00:14:44.952 "num_base_bdevs_operational": 4, 00:14:44.952 "base_bdevs_list": [ 
00:14:44.952 { 00:14:44.952 "name": "BaseBdev1", 00:14:44.952 "uuid": "87433fbf-b569-55b2-a9a1-666aea9816d0", 00:14:44.952 "is_configured": true, 00:14:44.952 "data_offset": 2048, 00:14:44.952 "data_size": 63488 00:14:44.952 }, 00:14:44.952 { 00:14:44.952 "name": "BaseBdev2", 00:14:44.952 "uuid": "50f32fb7-9e07-5e0b-a0a5-5beff713bbf3", 00:14:44.952 "is_configured": true, 00:14:44.952 "data_offset": 2048, 00:14:44.952 "data_size": 63488 00:14:44.952 }, 00:14:44.952 { 00:14:44.952 "name": "BaseBdev3", 00:14:44.952 "uuid": "3c9bbb82-6b83-5f16-85c1-d9049535e503", 00:14:44.952 "is_configured": true, 00:14:44.952 "data_offset": 2048, 00:14:44.953 "data_size": 63488 00:14:44.953 }, 00:14:44.953 { 00:14:44.953 "name": "BaseBdev4", 00:14:44.953 "uuid": "6be61fe2-5b9a-5db5-80b7-d234e159f0e5", 00:14:44.953 "is_configured": true, 00:14:44.953 "data_offset": 2048, 00:14:44.953 "data_size": 63488 00:14:44.953 } 00:14:44.953 ] 00:14:44.953 }' 00:14:44.953 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.953 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.210 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:45.210 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:45.468 [2024-11-20 07:11:27.544591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.400 07:11:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.400 07:11:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.400 "name": "raid_bdev1", 00:14:46.400 "uuid": "721ca755-bdfb-4c5e-9121-7504d8f9b20d", 00:14:46.400 "strip_size_kb": 64, 00:14:46.400 "state": "online", 00:14:46.400 "raid_level": "concat", 00:14:46.400 "superblock": true, 00:14:46.400 "num_base_bdevs": 4, 00:14:46.400 "num_base_bdevs_discovered": 4, 00:14:46.400 "num_base_bdevs_operational": 4, 00:14:46.400 "base_bdevs_list": [ 00:14:46.400 { 00:14:46.400 "name": "BaseBdev1", 00:14:46.400 "uuid": "87433fbf-b569-55b2-a9a1-666aea9816d0", 00:14:46.400 "is_configured": true, 00:14:46.400 "data_offset": 2048, 00:14:46.400 "data_size": 63488 00:14:46.400 }, 00:14:46.400 { 00:14:46.400 "name": "BaseBdev2", 00:14:46.400 "uuid": "50f32fb7-9e07-5e0b-a0a5-5beff713bbf3", 00:14:46.400 "is_configured": true, 00:14:46.400 "data_offset": 2048, 00:14:46.400 "data_size": 63488 00:14:46.400 }, 00:14:46.400 { 00:14:46.400 "name": "BaseBdev3", 00:14:46.400 "uuid": "3c9bbb82-6b83-5f16-85c1-d9049535e503", 00:14:46.400 "is_configured": true, 00:14:46.400 "data_offset": 2048, 00:14:46.400 "data_size": 63488 00:14:46.400 }, 00:14:46.400 { 00:14:46.400 "name": "BaseBdev4", 00:14:46.400 "uuid": "6be61fe2-5b9a-5db5-80b7-d234e159f0e5", 00:14:46.400 "is_configured": true, 00:14:46.400 "data_offset": 2048, 00:14:46.400 "data_size": 63488 00:14:46.400 } 00:14:46.400 ] 00:14:46.400 }' 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.400 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.968 [2024-11-20 07:11:28.954480] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.968 [2024-11-20 07:11:28.954633] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.968 [2024-11-20 07:11:28.957572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.968 [2024-11-20 07:11:28.957693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.968 [2024-11-20 07:11:28.957757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.968 [2024-11-20 07:11:28.957779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:46.968 { 00:14:46.968 "results": [ 00:14:46.968 { 00:14:46.968 "job": "raid_bdev1", 00:14:46.968 "core_mask": "0x1", 00:14:46.968 "workload": "randrw", 00:14:46.968 "percentage": 50, 00:14:46.968 "status": "finished", 00:14:46.968 "queue_depth": 1, 00:14:46.968 "io_size": 131072, 00:14:46.968 "runtime": 1.410199, 00:14:46.968 "iops": 12643.605618781463, 00:14:46.968 "mibps": 1580.4507023476829, 00:14:46.968 "io_failed": 1, 00:14:46.968 "io_timeout": 0, 00:14:46.968 "avg_latency_us": 111.34157317404383, 00:14:46.968 "min_latency_us": 27.83580786026201, 00:14:46.968 "max_latency_us": 1717.1004366812226 00:14:46.968 } 00:14:46.968 ], 00:14:46.968 "core_count": 1 00:14:46.968 } 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73221 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73221 ']' 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73221 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73221 00:14:46.968 killing process with pid 73221 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73221' 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73221 00:14:46.968 [2024-11-20 07:11:28.992691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.968 07:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73221 00:14:47.227 [2024-11-20 07:11:29.371528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jkXWkDU8il 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:48.605 ************************************ 00:14:48.605 END TEST raid_read_error_test 00:14:48.605 ************************************ 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:48.605 00:14:48.605 real 0m5.034s 
00:14:48.605 user 0m5.821s 00:14:48.605 sys 0m0.703s 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.605 07:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.605 07:11:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:48.605 07:11:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:48.605 07:11:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.605 07:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.605 ************************************ 00:14:48.605 START TEST raid_write_error_test 00:14:48.605 ************************************ 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CoVrX845tQ 00:14:48.605 07:11:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73372 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73372 00:14:48.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73372 ']' 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.605 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.863 [2024-11-20 07:11:30.901831] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:48.863 [2024-11-20 07:11:30.901960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73372 ] 00:14:48.863 [2024-11-20 07:11:31.078225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.121 [2024-11-20 07:11:31.222091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.381 [2024-11-20 07:11:31.468988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.381 [2024-11-20 07:11:31.469071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.642 BaseBdev1_malloc 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.642 true 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.642 [2024-11-20 07:11:31.863807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:49.642 [2024-11-20 07:11:31.863955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.642 [2024-11-20 07:11:31.864006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:49.642 [2024-11-20 07:11:31.864052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.642 [2024-11-20 07:11:31.866466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.642 [2024-11-20 07:11:31.866546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.642 BaseBdev1 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.642 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.912 BaseBdev2_malloc 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:49.912 07:11:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.912 true 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.912 [2024-11-20 07:11:31.938963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:49.912 [2024-11-20 07:11:31.939110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.912 [2024-11-20 07:11:31.939131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:49.912 [2024-11-20 07:11:31.939144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.912 [2024-11-20 07:11:31.941529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.912 [2024-11-20 07:11:31.941569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:49.912 BaseBdev2 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.912 07:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:49.912 BaseBdev3_malloc 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.912 true 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.912 [2024-11-20 07:11:32.024859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:49.912 [2024-11-20 07:11:32.025013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.912 [2024-11-20 07:11:32.025061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:49.912 [2024-11-20 07:11:32.025109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.912 [2024-11-20 07:11:32.027680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.912 [2024-11-20 07:11:32.027718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:49.912 BaseBdev3 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.912 BaseBdev4_malloc 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.912 true 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.912 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.913 [2024-11-20 07:11:32.102306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:49.913 [2024-11-20 07:11:32.102475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.913 [2024-11-20 07:11:32.102512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:49.913 [2024-11-20 07:11:32.102544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.913 [2024-11-20 07:11:32.104920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.913 [2024-11-20 07:11:32.104993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:49.913 BaseBdev4 
00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.913 [2024-11-20 07:11:32.114367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.913 [2024-11-20 07:11:32.116508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.913 [2024-11-20 07:11:32.116651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.913 [2024-11-20 07:11:32.116727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.913 [2024-11-20 07:11:32.116987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:49.913 [2024-11-20 07:11:32.117002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:49.913 [2024-11-20 07:11:32.117277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:49.913 [2024-11-20 07:11:32.117458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:49.913 [2024-11-20 07:11:32.117472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:49.913 [2024-11-20 07:11:32.117623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.913 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.185 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.185 "name": "raid_bdev1", 00:14:50.185 "uuid": "50bb31f7-8d8d-4461-9ec8-6771ec0fb2c7", 00:14:50.185 "strip_size_kb": 64, 00:14:50.185 "state": "online", 00:14:50.185 "raid_level": "concat", 00:14:50.185 "superblock": true, 00:14:50.185 "num_base_bdevs": 4, 00:14:50.185 "num_base_bdevs_discovered": 4, 00:14:50.185 
"num_base_bdevs_operational": 4, 00:14:50.185 "base_bdevs_list": [ 00:14:50.185 { 00:14:50.185 "name": "BaseBdev1", 00:14:50.185 "uuid": "1a974a4d-3081-5e2a-9841-e97439633296", 00:14:50.185 "is_configured": true, 00:14:50.185 "data_offset": 2048, 00:14:50.185 "data_size": 63488 00:14:50.185 }, 00:14:50.185 { 00:14:50.185 "name": "BaseBdev2", 00:14:50.185 "uuid": "8e22b0c8-d74e-5492-b630-71f1032a76e8", 00:14:50.185 "is_configured": true, 00:14:50.185 "data_offset": 2048, 00:14:50.185 "data_size": 63488 00:14:50.185 }, 00:14:50.185 { 00:14:50.185 "name": "BaseBdev3", 00:14:50.185 "uuid": "0f8e6217-89e4-515f-b18d-387134412538", 00:14:50.185 "is_configured": true, 00:14:50.185 "data_offset": 2048, 00:14:50.185 "data_size": 63488 00:14:50.185 }, 00:14:50.185 { 00:14:50.185 "name": "BaseBdev4", 00:14:50.185 "uuid": "5b3aa3ba-d599-5604-b91e-eb8038181f85", 00:14:50.185 "is_configured": true, 00:14:50.185 "data_offset": 2048, 00:14:50.185 "data_size": 63488 00:14:50.185 } 00:14:50.185 ] 00:14:50.185 }' 00:14:50.185 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.185 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.444 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:50.444 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:50.444 [2024-11-20 07:11:32.663044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.382 07:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.382 07:11:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.641 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.641 "name": "raid_bdev1", 00:14:51.641 "uuid": "50bb31f7-8d8d-4461-9ec8-6771ec0fb2c7", 00:14:51.641 "strip_size_kb": 64, 00:14:51.641 "state": "online", 00:14:51.641 "raid_level": "concat", 00:14:51.641 "superblock": true, 00:14:51.641 "num_base_bdevs": 4, 00:14:51.641 "num_base_bdevs_discovered": 4, 00:14:51.641 "num_base_bdevs_operational": 4, 00:14:51.641 "base_bdevs_list": [ 00:14:51.641 { 00:14:51.641 "name": "BaseBdev1", 00:14:51.641 "uuid": "1a974a4d-3081-5e2a-9841-e97439633296", 00:14:51.641 "is_configured": true, 00:14:51.641 "data_offset": 2048, 00:14:51.641 "data_size": 63488 00:14:51.641 }, 00:14:51.641 { 00:14:51.641 "name": "BaseBdev2", 00:14:51.641 "uuid": "8e22b0c8-d74e-5492-b630-71f1032a76e8", 00:14:51.642 "is_configured": true, 00:14:51.642 "data_offset": 2048, 00:14:51.642 "data_size": 63488 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "name": "BaseBdev3", 00:14:51.642 "uuid": "0f8e6217-89e4-515f-b18d-387134412538", 00:14:51.642 "is_configured": true, 00:14:51.642 "data_offset": 2048, 00:14:51.642 "data_size": 63488 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "name": "BaseBdev4", 00:14:51.642 "uuid": "5b3aa3ba-d599-5604-b91e-eb8038181f85", 00:14:51.642 "is_configured": true, 00:14:51.642 "data_offset": 2048, 00:14:51.642 "data_size": 63488 00:14:51.642 } 00:14:51.642 ] 00:14:51.642 }' 00:14:51.642 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.642 07:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.901 [2024-11-20 07:11:34.056256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.901 [2024-11-20 07:11:34.056403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.901 [2024-11-20 07:11:34.059234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.901 [2024-11-20 07:11:34.059311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.901 [2024-11-20 07:11:34.059374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.901 [2024-11-20 07:11:34.059393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:51.901 { 00:14:51.901 "results": [ 00:14:51.901 { 00:14:51.901 "job": "raid_bdev1", 00:14:51.901 "core_mask": "0x1", 00:14:51.901 "workload": "randrw", 00:14:51.901 "percentage": 50, 00:14:51.901 "status": "finished", 00:14:51.901 "queue_depth": 1, 00:14:51.901 "io_size": 131072, 00:14:51.901 "runtime": 1.393607, 00:14:51.901 "iops": 12863.741356063798, 00:14:51.901 "mibps": 1607.9676695079747, 00:14:51.901 "io_failed": 1, 00:14:51.901 "io_timeout": 0, 00:14:51.901 "avg_latency_us": 109.68171804150128, 00:14:51.901 "min_latency_us": 26.606113537117903, 00:14:51.901 "max_latency_us": 1416.6078602620087 00:14:51.901 } 00:14:51.901 ], 00:14:51.901 "core_count": 1 00:14:51.901 } 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73372 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73372 ']' 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73372 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73372 00:14:51.901 killing process with pid 73372 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73372' 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73372 00:14:51.901 [2024-11-20 07:11:34.089944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.901 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73372 00:14:52.469 [2024-11-20 07:11:34.465612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CoVrX845tQ 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:53.849 ************************************ 00:14:53.849 END TEST raid_write_error_test 00:14:53.849 ************************************ 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:53.849 07:11:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:53.849 00:14:53.849 real 0m5.113s 00:14:53.849 user 0m5.896s 00:14:53.849 sys 0m0.710s 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.849 07:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.849 07:11:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:53.849 07:11:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:53.849 07:11:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:53.849 07:11:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.849 07:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.849 ************************************ 00:14:53.849 START TEST raid_state_function_test 00:14:53.849 ************************************ 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:53.850 07:11:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73528 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73528' 00:14:53.850 Process raid pid: 73528 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73528 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73528 ']' 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.850 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.850 [2024-11-20 07:11:36.078276] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:53.850 [2024-11-20 07:11:36.078907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.109 [2024-11-20 07:11:36.240266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.369 [2024-11-20 07:11:36.385583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.628 [2024-11-20 07:11:36.643593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.628 [2024-11-20 07:11:36.643660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.888 [2024-11-20 07:11:36.960170] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.888 [2024-11-20 07:11:36.960250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.888 [2024-11-20 07:11:36.960263] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.888 [2024-11-20 07:11:36.960276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.888 [2024-11-20 07:11:36.960283] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:54.888 [2024-11-20 07:11:36.960295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.888 [2024-11-20 07:11:36.960302] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:54.888 [2024-11-20 07:11:36.960313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.888 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.888 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.888 "name": "Existed_Raid", 00:14:54.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.888 "strip_size_kb": 0, 00:14:54.888 "state": "configuring", 00:14:54.888 "raid_level": "raid1", 00:14:54.888 "superblock": false, 00:14:54.888 "num_base_bdevs": 4, 00:14:54.888 "num_base_bdevs_discovered": 0, 00:14:54.888 "num_base_bdevs_operational": 4, 00:14:54.888 "base_bdevs_list": [ 00:14:54.889 { 00:14:54.889 "name": "BaseBdev1", 00:14:54.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.889 "is_configured": false, 00:14:54.889 "data_offset": 0, 00:14:54.889 "data_size": 0 00:14:54.889 }, 00:14:54.889 { 00:14:54.889 "name": "BaseBdev2", 00:14:54.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.889 "is_configured": false, 00:14:54.889 "data_offset": 0, 00:14:54.889 "data_size": 0 00:14:54.889 }, 00:14:54.889 { 00:14:54.889 "name": "BaseBdev3", 00:14:54.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.889 "is_configured": false, 00:14:54.889 "data_offset": 0, 00:14:54.889 "data_size": 0 00:14:54.889 }, 00:14:54.889 { 00:14:54.889 "name": "BaseBdev4", 00:14:54.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.889 "is_configured": false, 00:14:54.889 "data_offset": 0, 00:14:54.889 "data_size": 0 00:14:54.889 } 00:14:54.889 ] 00:14:54.889 }' 00:14:54.889 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.889 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.458 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:55.458 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.458 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.459 [2024-11-20 07:11:37.455326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.459 [2024-11-20 07:11:37.455406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.459 [2024-11-20 07:11:37.467254] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.459 [2024-11-20 07:11:37.467307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.459 [2024-11-20 07:11:37.467318] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.459 [2024-11-20 07:11:37.467329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.459 [2024-11-20 07:11:37.467351] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.459 [2024-11-20 07:11:37.467363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.459 [2024-11-20 07:11:37.467370] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:55.459 [2024-11-20 07:11:37.467381] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.459 [2024-11-20 07:11:37.525271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.459 BaseBdev1 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.459 [ 00:14:55.459 { 00:14:55.459 "name": "BaseBdev1", 00:14:55.459 "aliases": [ 00:14:55.459 "1c002585-f5b0-4f6b-8360-a8b218f00a24" 00:14:55.459 ], 00:14:55.459 "product_name": "Malloc disk", 00:14:55.459 "block_size": 512, 00:14:55.459 "num_blocks": 65536, 00:14:55.459 "uuid": "1c002585-f5b0-4f6b-8360-a8b218f00a24", 00:14:55.459 "assigned_rate_limits": { 00:14:55.459 "rw_ios_per_sec": 0, 00:14:55.459 "rw_mbytes_per_sec": 0, 00:14:55.459 "r_mbytes_per_sec": 0, 00:14:55.459 "w_mbytes_per_sec": 0 00:14:55.459 }, 00:14:55.459 "claimed": true, 00:14:55.459 "claim_type": "exclusive_write", 00:14:55.459 "zoned": false, 00:14:55.459 "supported_io_types": { 00:14:55.459 "read": true, 00:14:55.459 "write": true, 00:14:55.459 "unmap": true, 00:14:55.459 "flush": true, 00:14:55.459 "reset": true, 00:14:55.459 "nvme_admin": false, 00:14:55.459 "nvme_io": false, 00:14:55.459 "nvme_io_md": false, 00:14:55.459 "write_zeroes": true, 00:14:55.459 "zcopy": true, 00:14:55.459 "get_zone_info": false, 00:14:55.459 "zone_management": false, 00:14:55.459 "zone_append": false, 00:14:55.459 "compare": false, 00:14:55.459 "compare_and_write": false, 00:14:55.459 "abort": true, 00:14:55.459 "seek_hole": false, 00:14:55.459 "seek_data": false, 00:14:55.459 "copy": true, 00:14:55.459 "nvme_iov_md": false 00:14:55.459 }, 00:14:55.459 "memory_domains": [ 00:14:55.459 { 00:14:55.459 "dma_device_id": "system", 00:14:55.459 "dma_device_type": 1 00:14:55.459 }, 00:14:55.459 { 00:14:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.459 "dma_device_type": 2 00:14:55.459 } 00:14:55.459 ], 00:14:55.459 "driver_specific": {} 00:14:55.459 } 00:14:55.459 ] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.459 "name": "Existed_Raid", 
00:14:55.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.459 "strip_size_kb": 0, 00:14:55.459 "state": "configuring", 00:14:55.459 "raid_level": "raid1", 00:14:55.459 "superblock": false, 00:14:55.459 "num_base_bdevs": 4, 00:14:55.459 "num_base_bdevs_discovered": 1, 00:14:55.459 "num_base_bdevs_operational": 4, 00:14:55.459 "base_bdevs_list": [ 00:14:55.459 { 00:14:55.459 "name": "BaseBdev1", 00:14:55.459 "uuid": "1c002585-f5b0-4f6b-8360-a8b218f00a24", 00:14:55.459 "is_configured": true, 00:14:55.459 "data_offset": 0, 00:14:55.459 "data_size": 65536 00:14:55.459 }, 00:14:55.459 { 00:14:55.459 "name": "BaseBdev2", 00:14:55.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.459 "is_configured": false, 00:14:55.459 "data_offset": 0, 00:14:55.459 "data_size": 0 00:14:55.459 }, 00:14:55.459 { 00:14:55.459 "name": "BaseBdev3", 00:14:55.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.459 "is_configured": false, 00:14:55.459 "data_offset": 0, 00:14:55.459 "data_size": 0 00:14:55.459 }, 00:14:55.459 { 00:14:55.459 "name": "BaseBdev4", 00:14:55.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.459 "is_configured": false, 00:14:55.459 "data_offset": 0, 00:14:55.459 "data_size": 0 00:14:55.459 } 00:14:55.459 ] 00:14:55.459 }' 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.459 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.027 [2024-11-20 07:11:38.020638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.027 [2024-11-20 07:11:38.020736] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.027 [2024-11-20 07:11:38.032685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.027 [2024-11-20 07:11:38.035079] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.027 [2024-11-20 07:11:38.035128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.027 [2024-11-20 07:11:38.035138] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.027 [2024-11-20 07:11:38.035150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.027 [2024-11-20 07:11:38.035157] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:56.027 [2024-11-20 07:11:38.035166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:56.027 
07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.027 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.027 "name": "Existed_Raid", 00:14:56.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.027 "strip_size_kb": 0, 00:14:56.027 "state": "configuring", 00:14:56.027 "raid_level": "raid1", 00:14:56.028 "superblock": false, 00:14:56.028 "num_base_bdevs": 4, 00:14:56.028 "num_base_bdevs_discovered": 1, 
00:14:56.028 "num_base_bdevs_operational": 4, 00:14:56.028 "base_bdevs_list": [ 00:14:56.028 { 00:14:56.028 "name": "BaseBdev1", 00:14:56.028 "uuid": "1c002585-f5b0-4f6b-8360-a8b218f00a24", 00:14:56.028 "is_configured": true, 00:14:56.028 "data_offset": 0, 00:14:56.028 "data_size": 65536 00:14:56.028 }, 00:14:56.028 { 00:14:56.028 "name": "BaseBdev2", 00:14:56.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.028 "is_configured": false, 00:14:56.028 "data_offset": 0, 00:14:56.028 "data_size": 0 00:14:56.028 }, 00:14:56.028 { 00:14:56.028 "name": "BaseBdev3", 00:14:56.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.028 "is_configured": false, 00:14:56.028 "data_offset": 0, 00:14:56.028 "data_size": 0 00:14:56.028 }, 00:14:56.028 { 00:14:56.028 "name": "BaseBdev4", 00:14:56.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.028 "is_configured": false, 00:14:56.028 "data_offset": 0, 00:14:56.028 "data_size": 0 00:14:56.028 } 00:14:56.028 ] 00:14:56.028 }' 00:14:56.028 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.028 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.286 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.286 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.286 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.286 [2024-11-20 07:11:38.547300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.286 BaseBdev2 00:14:56.286 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.286 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:56.286 07:11:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.554 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.554 [ 00:14:56.554 { 00:14:56.554 "name": "BaseBdev2", 00:14:56.554 "aliases": [ 00:14:56.554 "87d6196e-8138-4385-bc38-58da76674fd5" 00:14:56.554 ], 00:14:56.554 "product_name": "Malloc disk", 00:14:56.554 "block_size": 512, 00:14:56.554 "num_blocks": 65536, 00:14:56.554 "uuid": "87d6196e-8138-4385-bc38-58da76674fd5", 00:14:56.554 "assigned_rate_limits": { 00:14:56.554 "rw_ios_per_sec": 0, 00:14:56.554 "rw_mbytes_per_sec": 0, 00:14:56.554 "r_mbytes_per_sec": 0, 00:14:56.554 "w_mbytes_per_sec": 0 00:14:56.554 }, 00:14:56.554 "claimed": true, 00:14:56.554 "claim_type": "exclusive_write", 00:14:56.554 "zoned": false, 00:14:56.554 "supported_io_types": { 00:14:56.554 "read": true, 
00:14:56.554 "write": true, 00:14:56.554 "unmap": true, 00:14:56.554 "flush": true, 00:14:56.554 "reset": true, 00:14:56.554 "nvme_admin": false, 00:14:56.554 "nvme_io": false, 00:14:56.554 "nvme_io_md": false, 00:14:56.554 "write_zeroes": true, 00:14:56.554 "zcopy": true, 00:14:56.554 "get_zone_info": false, 00:14:56.554 "zone_management": false, 00:14:56.554 "zone_append": false, 00:14:56.554 "compare": false, 00:14:56.554 "compare_and_write": false, 00:14:56.554 "abort": true, 00:14:56.554 "seek_hole": false, 00:14:56.555 "seek_data": false, 00:14:56.555 "copy": true, 00:14:56.555 "nvme_iov_md": false 00:14:56.555 }, 00:14:56.555 "memory_domains": [ 00:14:56.555 { 00:14:56.555 "dma_device_id": "system", 00:14:56.555 "dma_device_type": 1 00:14:56.555 }, 00:14:56.555 { 00:14:56.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.555 "dma_device_type": 2 00:14:56.555 } 00:14:56.555 ], 00:14:56.555 "driver_specific": {} 00:14:56.555 } 00:14:56.555 ] 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.555 "name": "Existed_Raid", 00:14:56.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.555 "strip_size_kb": 0, 00:14:56.555 "state": "configuring", 00:14:56.555 "raid_level": "raid1", 00:14:56.555 "superblock": false, 00:14:56.555 "num_base_bdevs": 4, 00:14:56.555 "num_base_bdevs_discovered": 2, 00:14:56.555 "num_base_bdevs_operational": 4, 00:14:56.555 "base_bdevs_list": [ 00:14:56.555 { 00:14:56.555 "name": "BaseBdev1", 00:14:56.555 "uuid": "1c002585-f5b0-4f6b-8360-a8b218f00a24", 00:14:56.555 "is_configured": true, 00:14:56.555 "data_offset": 0, 00:14:56.555 "data_size": 65536 00:14:56.555 }, 00:14:56.555 { 00:14:56.555 "name": "BaseBdev2", 00:14:56.555 "uuid": "87d6196e-8138-4385-bc38-58da76674fd5", 00:14:56.555 "is_configured": true, 
00:14:56.555 "data_offset": 0, 00:14:56.555 "data_size": 65536 00:14:56.555 }, 00:14:56.555 { 00:14:56.555 "name": "BaseBdev3", 00:14:56.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.555 "is_configured": false, 00:14:56.555 "data_offset": 0, 00:14:56.555 "data_size": 0 00:14:56.555 }, 00:14:56.555 { 00:14:56.555 "name": "BaseBdev4", 00:14:56.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.555 "is_configured": false, 00:14:56.555 "data_offset": 0, 00:14:56.555 "data_size": 0 00:14:56.555 } 00:14:56.555 ] 00:14:56.555 }' 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.555 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.815 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.815 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.815 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.815 [2024-11-20 07:11:39.022809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.815 BaseBdev3 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.815 [ 00:14:56.815 { 00:14:56.815 "name": "BaseBdev3", 00:14:56.815 "aliases": [ 00:14:56.815 "be411c0e-027c-4440-91a3-4180b953ec7e" 00:14:56.815 ], 00:14:56.815 "product_name": "Malloc disk", 00:14:56.815 "block_size": 512, 00:14:56.815 "num_blocks": 65536, 00:14:56.815 "uuid": "be411c0e-027c-4440-91a3-4180b953ec7e", 00:14:56.815 "assigned_rate_limits": { 00:14:56.815 "rw_ios_per_sec": 0, 00:14:56.815 "rw_mbytes_per_sec": 0, 00:14:56.815 "r_mbytes_per_sec": 0, 00:14:56.815 "w_mbytes_per_sec": 0 00:14:56.815 }, 00:14:56.815 "claimed": true, 00:14:56.815 "claim_type": "exclusive_write", 00:14:56.815 "zoned": false, 00:14:56.815 "supported_io_types": { 00:14:56.815 "read": true, 00:14:56.815 "write": true, 00:14:56.815 "unmap": true, 00:14:56.815 "flush": true, 00:14:56.815 "reset": true, 00:14:56.815 "nvme_admin": false, 00:14:56.815 "nvme_io": false, 00:14:56.815 "nvme_io_md": false, 00:14:56.815 "write_zeroes": true, 00:14:56.815 "zcopy": true, 00:14:56.815 "get_zone_info": false, 00:14:56.815 "zone_management": false, 00:14:56.815 "zone_append": false, 00:14:56.815 "compare": false, 00:14:56.815 "compare_and_write": false, 
00:14:56.815 "abort": true, 00:14:56.815 "seek_hole": false, 00:14:56.815 "seek_data": false, 00:14:56.815 "copy": true, 00:14:56.815 "nvme_iov_md": false 00:14:56.815 }, 00:14:56.815 "memory_domains": [ 00:14:56.815 { 00:14:56.815 "dma_device_id": "system", 00:14:56.815 "dma_device_type": 1 00:14:56.815 }, 00:14:56.815 { 00:14:56.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.815 "dma_device_type": 2 00:14:56.815 } 00:14:56.815 ], 00:14:56.815 "driver_specific": {} 00:14:56.815 } 00:14:56.815 ] 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.815 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.075 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.075 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.075 "name": "Existed_Raid", 00:14:57.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.075 "strip_size_kb": 0, 00:14:57.075 "state": "configuring", 00:14:57.075 "raid_level": "raid1", 00:14:57.075 "superblock": false, 00:14:57.075 "num_base_bdevs": 4, 00:14:57.075 "num_base_bdevs_discovered": 3, 00:14:57.075 "num_base_bdevs_operational": 4, 00:14:57.075 "base_bdevs_list": [ 00:14:57.075 { 00:14:57.075 "name": "BaseBdev1", 00:14:57.075 "uuid": "1c002585-f5b0-4f6b-8360-a8b218f00a24", 00:14:57.075 "is_configured": true, 00:14:57.075 "data_offset": 0, 00:14:57.075 "data_size": 65536 00:14:57.075 }, 00:14:57.075 { 00:14:57.075 "name": "BaseBdev2", 00:14:57.075 "uuid": "87d6196e-8138-4385-bc38-58da76674fd5", 00:14:57.075 "is_configured": true, 00:14:57.075 "data_offset": 0, 00:14:57.075 "data_size": 65536 00:14:57.075 }, 00:14:57.075 { 00:14:57.075 "name": "BaseBdev3", 00:14:57.075 "uuid": "be411c0e-027c-4440-91a3-4180b953ec7e", 00:14:57.075 "is_configured": true, 00:14:57.075 "data_offset": 0, 00:14:57.075 "data_size": 65536 00:14:57.075 }, 00:14:57.075 { 00:14:57.075 "name": "BaseBdev4", 00:14:57.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.075 "is_configured": false, 
00:14:57.075 "data_offset": 0, 00:14:57.075 "data_size": 0 00:14:57.075 } 00:14:57.075 ] 00:14:57.075 }' 00:14:57.075 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.075 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.335 [2024-11-20 07:11:39.545562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.335 [2024-11-20 07:11:39.545629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.335 [2024-11-20 07:11:39.545638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:57.335 [2024-11-20 07:11:39.545959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:57.335 [2024-11-20 07:11:39.546168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.335 [2024-11-20 07:11:39.546202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:57.335 [2024-11-20 07:11:39.546521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.335 BaseBdev4 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.335 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.336 [ 00:14:57.336 { 00:14:57.336 "name": "BaseBdev4", 00:14:57.336 "aliases": [ 00:14:57.336 "867f75d7-c032-48bf-a0c2-64246d306a76" 00:14:57.336 ], 00:14:57.336 "product_name": "Malloc disk", 00:14:57.336 "block_size": 512, 00:14:57.336 "num_blocks": 65536, 00:14:57.336 "uuid": "867f75d7-c032-48bf-a0c2-64246d306a76", 00:14:57.336 "assigned_rate_limits": { 00:14:57.336 "rw_ios_per_sec": 0, 00:14:57.336 "rw_mbytes_per_sec": 0, 00:14:57.336 "r_mbytes_per_sec": 0, 00:14:57.336 "w_mbytes_per_sec": 0 00:14:57.336 }, 00:14:57.336 "claimed": true, 00:14:57.336 "claim_type": "exclusive_write", 00:14:57.336 "zoned": false, 00:14:57.336 "supported_io_types": { 00:14:57.336 "read": true, 00:14:57.336 "write": true, 00:14:57.336 "unmap": true, 00:14:57.336 "flush": true, 00:14:57.336 "reset": true, 00:14:57.336 
"nvme_admin": false, 00:14:57.336 "nvme_io": false, 00:14:57.336 "nvme_io_md": false, 00:14:57.336 "write_zeroes": true, 00:14:57.336 "zcopy": true, 00:14:57.336 "get_zone_info": false, 00:14:57.336 "zone_management": false, 00:14:57.336 "zone_append": false, 00:14:57.336 "compare": false, 00:14:57.336 "compare_and_write": false, 00:14:57.336 "abort": true, 00:14:57.336 "seek_hole": false, 00:14:57.336 "seek_data": false, 00:14:57.336 "copy": true, 00:14:57.336 "nvme_iov_md": false 00:14:57.336 }, 00:14:57.336 "memory_domains": [ 00:14:57.336 { 00:14:57.336 "dma_device_id": "system", 00:14:57.336 "dma_device_type": 1 00:14:57.336 }, 00:14:57.336 { 00:14:57.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.336 "dma_device_type": 2 00:14:57.336 } 00:14:57.336 ], 00:14:57.336 "driver_specific": {} 00:14:57.336 } 00:14:57.336 ] 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.336 07:11:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.336 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.595 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.595 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.595 "name": "Existed_Raid", 00:14:57.595 "uuid": "cc076d1e-1ada-4393-b8b0-610735398f8c", 00:14:57.595 "strip_size_kb": 0, 00:14:57.595 "state": "online", 00:14:57.595 "raid_level": "raid1", 00:14:57.595 "superblock": false, 00:14:57.595 "num_base_bdevs": 4, 00:14:57.595 "num_base_bdevs_discovered": 4, 00:14:57.596 "num_base_bdevs_operational": 4, 00:14:57.596 "base_bdevs_list": [ 00:14:57.596 { 00:14:57.596 "name": "BaseBdev1", 00:14:57.596 "uuid": "1c002585-f5b0-4f6b-8360-a8b218f00a24", 00:14:57.596 "is_configured": true, 00:14:57.596 "data_offset": 0, 00:14:57.596 "data_size": 65536 00:14:57.596 }, 00:14:57.596 { 00:14:57.596 "name": "BaseBdev2", 00:14:57.596 "uuid": "87d6196e-8138-4385-bc38-58da76674fd5", 00:14:57.596 "is_configured": true, 00:14:57.596 "data_offset": 0, 00:14:57.596 "data_size": 65536 00:14:57.596 }, 00:14:57.596 { 00:14:57.596 "name": "BaseBdev3", 00:14:57.596 "uuid": 
"be411c0e-027c-4440-91a3-4180b953ec7e", 00:14:57.596 "is_configured": true, 00:14:57.596 "data_offset": 0, 00:14:57.596 "data_size": 65536 00:14:57.596 }, 00:14:57.596 { 00:14:57.596 "name": "BaseBdev4", 00:14:57.596 "uuid": "867f75d7-c032-48bf-a0c2-64246d306a76", 00:14:57.596 "is_configured": true, 00:14:57.596 "data_offset": 0, 00:14:57.596 "data_size": 65536 00:14:57.596 } 00:14:57.596 ] 00:14:57.596 }' 00:14:57.596 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.596 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.856 [2024-11-20 07:11:40.057205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.856 07:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.856 "name": "Existed_Raid", 00:14:57.856 "aliases": [ 00:14:57.856 "cc076d1e-1ada-4393-b8b0-610735398f8c" 00:14:57.856 ], 00:14:57.856 "product_name": "Raid Volume", 00:14:57.856 "block_size": 512, 00:14:57.856 "num_blocks": 65536, 00:14:57.856 "uuid": "cc076d1e-1ada-4393-b8b0-610735398f8c", 00:14:57.856 "assigned_rate_limits": { 00:14:57.856 "rw_ios_per_sec": 0, 00:14:57.856 "rw_mbytes_per_sec": 0, 00:14:57.856 "r_mbytes_per_sec": 0, 00:14:57.856 "w_mbytes_per_sec": 0 00:14:57.856 }, 00:14:57.856 "claimed": false, 00:14:57.856 "zoned": false, 00:14:57.856 "supported_io_types": { 00:14:57.856 "read": true, 00:14:57.856 "write": true, 00:14:57.856 "unmap": false, 00:14:57.856 "flush": false, 00:14:57.856 "reset": true, 00:14:57.856 "nvme_admin": false, 00:14:57.856 "nvme_io": false, 00:14:57.856 "nvme_io_md": false, 00:14:57.856 "write_zeroes": true, 00:14:57.856 "zcopy": false, 00:14:57.856 "get_zone_info": false, 00:14:57.856 "zone_management": false, 00:14:57.856 "zone_append": false, 00:14:57.856 "compare": false, 00:14:57.856 "compare_and_write": false, 00:14:57.856 "abort": false, 00:14:57.856 "seek_hole": false, 00:14:57.856 "seek_data": false, 00:14:57.856 "copy": false, 00:14:57.856 "nvme_iov_md": false 00:14:57.856 }, 00:14:57.856 "memory_domains": [ 00:14:57.856 { 00:14:57.856 "dma_device_id": "system", 00:14:57.856 "dma_device_type": 1 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.856 "dma_device_type": 2 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "dma_device_id": "system", 00:14:57.856 "dma_device_type": 1 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.856 "dma_device_type": 2 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "dma_device_id": "system", 00:14:57.856 "dma_device_type": 1 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:57.856 "dma_device_type": 2 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "dma_device_id": "system", 00:14:57.856 "dma_device_type": 1 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.856 "dma_device_type": 2 00:14:57.856 } 00:14:57.856 ], 00:14:57.856 "driver_specific": { 00:14:57.856 "raid": { 00:14:57.856 "uuid": "cc076d1e-1ada-4393-b8b0-610735398f8c", 00:14:57.856 "strip_size_kb": 0, 00:14:57.856 "state": "online", 00:14:57.856 "raid_level": "raid1", 00:14:57.856 "superblock": false, 00:14:57.856 "num_base_bdevs": 4, 00:14:57.856 "num_base_bdevs_discovered": 4, 00:14:57.856 "num_base_bdevs_operational": 4, 00:14:57.856 "base_bdevs_list": [ 00:14:57.856 { 00:14:57.856 "name": "BaseBdev1", 00:14:57.856 "uuid": "1c002585-f5b0-4f6b-8360-a8b218f00a24", 00:14:57.856 "is_configured": true, 00:14:57.856 "data_offset": 0, 00:14:57.856 "data_size": 65536 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "name": "BaseBdev2", 00:14:57.856 "uuid": "87d6196e-8138-4385-bc38-58da76674fd5", 00:14:57.856 "is_configured": true, 00:14:57.856 "data_offset": 0, 00:14:57.856 "data_size": 65536 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "name": "BaseBdev3", 00:14:57.856 "uuid": "be411c0e-027c-4440-91a3-4180b953ec7e", 00:14:57.856 "is_configured": true, 00:14:57.856 "data_offset": 0, 00:14:57.856 "data_size": 65536 00:14:57.856 }, 00:14:57.856 { 00:14:57.856 "name": "BaseBdev4", 00:14:57.856 "uuid": "867f75d7-c032-48bf-a0c2-64246d306a76", 00:14:57.856 "is_configured": true, 00:14:57.856 "data_offset": 0, 00:14:57.856 "data_size": 65536 00:14:57.856 } 00:14:57.856 ] 00:14:57.856 } 00:14:57.856 } 00:14:57.856 }' 00:14:57.856 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:58.116 BaseBdev2 00:14:58.116 BaseBdev3 
00:14:58.116 BaseBdev4' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.116 07:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.116 07:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.116 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.116 [2024-11-20 07:11:40.348409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.375 
07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.375 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.375 "name": "Existed_Raid", 00:14:58.375 "uuid": "cc076d1e-1ada-4393-b8b0-610735398f8c", 00:14:58.375 "strip_size_kb": 0, 00:14:58.375 "state": "online", 00:14:58.375 "raid_level": "raid1", 00:14:58.375 "superblock": false, 00:14:58.375 "num_base_bdevs": 4, 00:14:58.375 "num_base_bdevs_discovered": 3, 00:14:58.375 "num_base_bdevs_operational": 3, 00:14:58.375 "base_bdevs_list": [ 00:14:58.375 { 00:14:58.375 "name": null, 00:14:58.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.375 "is_configured": false, 00:14:58.375 "data_offset": 0, 00:14:58.375 "data_size": 65536 00:14:58.375 }, 00:14:58.375 { 00:14:58.375 "name": "BaseBdev2", 00:14:58.375 "uuid": "87d6196e-8138-4385-bc38-58da76674fd5", 00:14:58.375 "is_configured": true, 00:14:58.375 "data_offset": 0, 00:14:58.375 "data_size": 65536 00:14:58.375 }, 00:14:58.375 { 00:14:58.375 "name": "BaseBdev3", 00:14:58.375 "uuid": "be411c0e-027c-4440-91a3-4180b953ec7e", 00:14:58.376 "is_configured": true, 00:14:58.376 "data_offset": 0, 
00:14:58.376 "data_size": 65536 00:14:58.376 }, 00:14:58.376 { 00:14:58.376 "name": "BaseBdev4", 00:14:58.376 "uuid": "867f75d7-c032-48bf-a0c2-64246d306a76", 00:14:58.376 "is_configured": true, 00:14:58.376 "data_offset": 0, 00:14:58.376 "data_size": 65536 00:14:58.376 } 00:14:58.376 ] 00:14:58.376 }' 00:14:58.376 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.376 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:58.944 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.944 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.944 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.944 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.944 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.945 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.945 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.945 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:58.945 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.945 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.945 [2024-11-20 07:11:40.973921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.945 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.945 [2024-11-20 07:11:41.153700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.204 [2024-11-20 07:11:41.327909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:59.204 [2024-11-20 07:11:41.328055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.204 [2024-11-20 07:11:41.447763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.204 [2024-11-20 07:11:41.447856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.204 [2024-11-20 07:11:41.447897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.204 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.465 BaseBdev2 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.465 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.465 [ 00:14:59.465 { 00:14:59.465 "name": "BaseBdev2", 00:14:59.465 "aliases": [ 00:14:59.465 "cf0da111-a2aa-44c4-b6db-6d50945dd161" 00:14:59.465 ], 00:14:59.465 "product_name": "Malloc disk", 00:14:59.465 "block_size": 512, 00:14:59.465 "num_blocks": 65536, 00:14:59.465 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:14:59.465 "assigned_rate_limits": { 00:14:59.465 "rw_ios_per_sec": 0, 00:14:59.465 "rw_mbytes_per_sec": 0, 00:14:59.465 "r_mbytes_per_sec": 0, 00:14:59.465 "w_mbytes_per_sec": 0 00:14:59.465 }, 00:14:59.465 "claimed": false, 00:14:59.465 "zoned": false, 00:14:59.465 "supported_io_types": { 00:14:59.465 "read": true, 00:14:59.465 "write": true, 00:14:59.465 "unmap": true, 00:14:59.465 "flush": true, 00:14:59.465 "reset": true, 00:14:59.465 "nvme_admin": false, 00:14:59.465 "nvme_io": false, 00:14:59.465 "nvme_io_md": false, 00:14:59.465 "write_zeroes": true, 00:14:59.465 "zcopy": true, 00:14:59.465 "get_zone_info": false, 00:14:59.465 "zone_management": false, 00:14:59.465 "zone_append": false, 00:14:59.466 "compare": false, 
00:14:59.466 "compare_and_write": false, 00:14:59.466 "abort": true, 00:14:59.466 "seek_hole": false, 00:14:59.466 "seek_data": false, 00:14:59.466 "copy": true, 00:14:59.466 "nvme_iov_md": false 00:14:59.466 }, 00:14:59.466 "memory_domains": [ 00:14:59.466 { 00:14:59.466 "dma_device_id": "system", 00:14:59.466 "dma_device_type": 1 00:14:59.466 }, 00:14:59.466 { 00:14:59.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.466 "dma_device_type": 2 00:14:59.466 } 00:14:59.466 ], 00:14:59.466 "driver_specific": {} 00:14:59.466 } 00:14:59.466 ] 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.466 BaseBdev3 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.466 [ 00:14:59.466 { 00:14:59.466 "name": "BaseBdev3", 00:14:59.466 "aliases": [ 00:14:59.466 "69164eee-2afe-42d8-9ab3-0e8ba85c30cd" 00:14:59.466 ], 00:14:59.466 "product_name": "Malloc disk", 00:14:59.466 "block_size": 512, 00:14:59.466 "num_blocks": 65536, 00:14:59.466 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:14:59.466 "assigned_rate_limits": { 00:14:59.466 "rw_ios_per_sec": 0, 00:14:59.466 "rw_mbytes_per_sec": 0, 00:14:59.466 "r_mbytes_per_sec": 0, 00:14:59.466 "w_mbytes_per_sec": 0 00:14:59.466 }, 00:14:59.466 "claimed": false, 00:14:59.466 "zoned": false, 00:14:59.466 "supported_io_types": { 00:14:59.466 "read": true, 00:14:59.466 "write": true, 00:14:59.466 "unmap": true, 00:14:59.466 "flush": true, 00:14:59.466 "reset": true, 00:14:59.466 "nvme_admin": false, 00:14:59.466 "nvme_io": false, 00:14:59.466 "nvme_io_md": false, 00:14:59.466 "write_zeroes": true, 00:14:59.466 "zcopy": true, 00:14:59.466 "get_zone_info": false, 00:14:59.466 "zone_management": false, 00:14:59.466 "zone_append": false, 00:14:59.466 "compare": false, 00:14:59.466 
"compare_and_write": false, 00:14:59.466 "abort": true, 00:14:59.466 "seek_hole": false, 00:14:59.466 "seek_data": false, 00:14:59.466 "copy": true, 00:14:59.466 "nvme_iov_md": false 00:14:59.466 }, 00:14:59.466 "memory_domains": [ 00:14:59.466 { 00:14:59.466 "dma_device_id": "system", 00:14:59.466 "dma_device_type": 1 00:14:59.466 }, 00:14:59.466 { 00:14:59.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.466 "dma_device_type": 2 00:14:59.466 } 00:14:59.466 ], 00:14:59.466 "driver_specific": {} 00:14:59.466 } 00:14:59.466 ] 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.466 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.726 BaseBdev4 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.726 [ 00:14:59.726 { 00:14:59.726 "name": "BaseBdev4", 00:14:59.726 "aliases": [ 00:14:59.726 "680b08b3-82b5-4daa-8950-3470c23762d2" 00:14:59.726 ], 00:14:59.726 "product_name": "Malloc disk", 00:14:59.726 "block_size": 512, 00:14:59.726 "num_blocks": 65536, 00:14:59.726 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:14:59.726 "assigned_rate_limits": { 00:14:59.726 "rw_ios_per_sec": 0, 00:14:59.726 "rw_mbytes_per_sec": 0, 00:14:59.726 "r_mbytes_per_sec": 0, 00:14:59.726 "w_mbytes_per_sec": 0 00:14:59.726 }, 00:14:59.726 "claimed": false, 00:14:59.726 "zoned": false, 00:14:59.726 "supported_io_types": { 00:14:59.726 "read": true, 00:14:59.726 "write": true, 00:14:59.726 "unmap": true, 00:14:59.726 "flush": true, 00:14:59.726 "reset": true, 00:14:59.726 "nvme_admin": false, 00:14:59.726 "nvme_io": false, 00:14:59.726 "nvme_io_md": false, 00:14:59.726 "write_zeroes": true, 00:14:59.726 "zcopy": true, 00:14:59.726 "get_zone_info": false, 00:14:59.726 "zone_management": false, 00:14:59.726 "zone_append": false, 00:14:59.726 "compare": false, 00:14:59.726 
"compare_and_write": false, 00:14:59.726 "abort": true, 00:14:59.726 "seek_hole": false, 00:14:59.726 "seek_data": false, 00:14:59.726 "copy": true, 00:14:59.726 "nvme_iov_md": false 00:14:59.726 }, 00:14:59.726 "memory_domains": [ 00:14:59.726 { 00:14:59.726 "dma_device_id": "system", 00:14:59.726 "dma_device_type": 1 00:14:59.726 }, 00:14:59.726 { 00:14:59.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.726 "dma_device_type": 2 00:14:59.726 } 00:14:59.726 ], 00:14:59.726 "driver_specific": {} 00:14:59.726 } 00:14:59.726 ] 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.726 [2024-11-20 07:11:41.769440] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.726 [2024-11-20 07:11:41.769507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.726 [2024-11-20 07:11:41.769533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.726 [2024-11-20 07:11:41.771960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.726 [2024-11-20 07:11:41.772024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.726 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.727 "name": "Existed_Raid", 00:14:59.727 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:59.727 "strip_size_kb": 0, 00:14:59.727 "state": "configuring", 00:14:59.727 "raid_level": "raid1", 00:14:59.727 "superblock": false, 00:14:59.727 "num_base_bdevs": 4, 00:14:59.727 "num_base_bdevs_discovered": 3, 00:14:59.727 "num_base_bdevs_operational": 4, 00:14:59.727 "base_bdevs_list": [ 00:14:59.727 { 00:14:59.727 "name": "BaseBdev1", 00:14:59.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.727 "is_configured": false, 00:14:59.727 "data_offset": 0, 00:14:59.727 "data_size": 0 00:14:59.727 }, 00:14:59.727 { 00:14:59.727 "name": "BaseBdev2", 00:14:59.727 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:14:59.727 "is_configured": true, 00:14:59.727 "data_offset": 0, 00:14:59.727 "data_size": 65536 00:14:59.727 }, 00:14:59.727 { 00:14:59.727 "name": "BaseBdev3", 00:14:59.727 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:14:59.727 "is_configured": true, 00:14:59.727 "data_offset": 0, 00:14:59.727 "data_size": 65536 00:14:59.727 }, 00:14:59.727 { 00:14:59.727 "name": "BaseBdev4", 00:14:59.727 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:14:59.727 "is_configured": true, 00:14:59.727 "data_offset": 0, 00:14:59.727 "data_size": 65536 00:14:59.727 } 00:14:59.727 ] 00:14:59.727 }' 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.727 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.987 [2024-11-20 07:11:42.188857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.987 "name": "Existed_Raid", 00:14:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.987 
"strip_size_kb": 0, 00:14:59.987 "state": "configuring", 00:14:59.987 "raid_level": "raid1", 00:14:59.987 "superblock": false, 00:14:59.987 "num_base_bdevs": 4, 00:14:59.987 "num_base_bdevs_discovered": 2, 00:14:59.987 "num_base_bdevs_operational": 4, 00:14:59.987 "base_bdevs_list": [ 00:14:59.987 { 00:14:59.987 "name": "BaseBdev1", 00:14:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.987 "is_configured": false, 00:14:59.987 "data_offset": 0, 00:14:59.987 "data_size": 0 00:14:59.987 }, 00:14:59.987 { 00:14:59.987 "name": null, 00:14:59.987 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:14:59.987 "is_configured": false, 00:14:59.987 "data_offset": 0, 00:14:59.987 "data_size": 65536 00:14:59.987 }, 00:14:59.987 { 00:14:59.987 "name": "BaseBdev3", 00:14:59.987 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:14:59.987 "is_configured": true, 00:14:59.987 "data_offset": 0, 00:14:59.987 "data_size": 65536 00:14:59.987 }, 00:14:59.987 { 00:14:59.987 "name": "BaseBdev4", 00:14:59.987 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:14:59.987 "is_configured": true, 00:14:59.987 "data_offset": 0, 00:14:59.987 "data_size": 65536 00:14:59.987 } 00:14:59.987 ] 00:14:59.987 }' 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.987 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.562 07:11:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.562 [2024-11-20 07:11:42.781082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.562 BaseBdev1 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.562 [ 00:15:00.562 { 00:15:00.562 "name": "BaseBdev1", 00:15:00.562 "aliases": [ 00:15:00.562 "2239d926-eb27-4eba-ab95-ddf990fad8dd" 00:15:00.562 ], 00:15:00.562 "product_name": "Malloc disk", 00:15:00.562 "block_size": 512, 00:15:00.562 "num_blocks": 65536, 00:15:00.562 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:00.562 "assigned_rate_limits": { 00:15:00.562 "rw_ios_per_sec": 0, 00:15:00.562 "rw_mbytes_per_sec": 0, 00:15:00.562 "r_mbytes_per_sec": 0, 00:15:00.562 "w_mbytes_per_sec": 0 00:15:00.562 }, 00:15:00.562 "claimed": true, 00:15:00.562 "claim_type": "exclusive_write", 00:15:00.562 "zoned": false, 00:15:00.562 "supported_io_types": { 00:15:00.562 "read": true, 00:15:00.562 "write": true, 00:15:00.562 "unmap": true, 00:15:00.562 "flush": true, 00:15:00.562 "reset": true, 00:15:00.562 "nvme_admin": false, 00:15:00.562 "nvme_io": false, 00:15:00.562 "nvme_io_md": false, 00:15:00.562 "write_zeroes": true, 00:15:00.562 "zcopy": true, 00:15:00.562 "get_zone_info": false, 00:15:00.562 "zone_management": false, 00:15:00.562 "zone_append": false, 00:15:00.562 "compare": false, 00:15:00.562 "compare_and_write": false, 00:15:00.562 "abort": true, 00:15:00.562 "seek_hole": false, 00:15:00.562 "seek_data": false, 00:15:00.562 "copy": true, 00:15:00.562 "nvme_iov_md": false 00:15:00.562 }, 00:15:00.562 "memory_domains": [ 00:15:00.562 { 00:15:00.562 "dma_device_id": "system", 00:15:00.562 "dma_device_type": 1 00:15:00.562 }, 00:15:00.562 { 00:15:00.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.562 "dma_device_type": 2 00:15:00.562 } 00:15:00.562 ], 00:15:00.562 "driver_specific": {} 00:15:00.562 } 00:15:00.562 ] 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.562 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.858 "name": "Existed_Raid", 00:15:00.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.858 
"strip_size_kb": 0, 00:15:00.858 "state": "configuring", 00:15:00.858 "raid_level": "raid1", 00:15:00.858 "superblock": false, 00:15:00.858 "num_base_bdevs": 4, 00:15:00.858 "num_base_bdevs_discovered": 3, 00:15:00.858 "num_base_bdevs_operational": 4, 00:15:00.858 "base_bdevs_list": [ 00:15:00.858 { 00:15:00.858 "name": "BaseBdev1", 00:15:00.858 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:00.858 "is_configured": true, 00:15:00.858 "data_offset": 0, 00:15:00.858 "data_size": 65536 00:15:00.858 }, 00:15:00.858 { 00:15:00.858 "name": null, 00:15:00.858 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:15:00.858 "is_configured": false, 00:15:00.858 "data_offset": 0, 00:15:00.858 "data_size": 65536 00:15:00.858 }, 00:15:00.858 { 00:15:00.858 "name": "BaseBdev3", 00:15:00.858 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:15:00.858 "is_configured": true, 00:15:00.858 "data_offset": 0, 00:15:00.858 "data_size": 65536 00:15:00.858 }, 00:15:00.858 { 00:15:00.858 "name": "BaseBdev4", 00:15:00.858 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:15:00.858 "is_configured": true, 00:15:00.858 "data_offset": 0, 00:15:00.858 "data_size": 65536 00:15:00.858 } 00:15:00.858 ] 00:15:00.858 }' 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.858 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.117 
07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.117 [2024-11-20 07:11:43.268415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.117 "name": "Existed_Raid", 00:15:01.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.117 "strip_size_kb": 0, 00:15:01.117 "state": "configuring", 00:15:01.117 "raid_level": "raid1", 00:15:01.117 "superblock": false, 00:15:01.117 "num_base_bdevs": 4, 00:15:01.117 "num_base_bdevs_discovered": 2, 00:15:01.117 "num_base_bdevs_operational": 4, 00:15:01.117 "base_bdevs_list": [ 00:15:01.117 { 00:15:01.117 "name": "BaseBdev1", 00:15:01.117 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:01.117 "is_configured": true, 00:15:01.117 "data_offset": 0, 00:15:01.117 "data_size": 65536 00:15:01.117 }, 00:15:01.117 { 00:15:01.117 "name": null, 00:15:01.117 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:15:01.117 "is_configured": false, 00:15:01.117 "data_offset": 0, 00:15:01.117 "data_size": 65536 00:15:01.117 }, 00:15:01.117 { 00:15:01.117 "name": null, 00:15:01.117 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:15:01.117 "is_configured": false, 00:15:01.117 "data_offset": 0, 00:15:01.117 "data_size": 65536 00:15:01.117 }, 00:15:01.117 { 00:15:01.117 "name": "BaseBdev4", 00:15:01.117 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:15:01.117 "is_configured": true, 00:15:01.117 "data_offset": 0, 00:15:01.117 "data_size": 65536 00:15:01.117 } 00:15:01.117 ] 00:15:01.117 }' 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.117 07:11:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 [2024-11-20 07:11:43.791521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.685 "name": "Existed_Raid", 00:15:01.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.685 "strip_size_kb": 0, 00:15:01.685 "state": "configuring", 00:15:01.685 "raid_level": "raid1", 00:15:01.685 "superblock": false, 00:15:01.685 "num_base_bdevs": 4, 00:15:01.685 "num_base_bdevs_discovered": 3, 00:15:01.685 "num_base_bdevs_operational": 4, 00:15:01.685 "base_bdevs_list": [ 00:15:01.685 { 00:15:01.685 "name": "BaseBdev1", 00:15:01.685 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:01.685 "is_configured": true, 00:15:01.685 "data_offset": 0, 00:15:01.685 "data_size": 65536 00:15:01.685 }, 00:15:01.685 { 00:15:01.685 "name": null, 00:15:01.685 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:15:01.685 "is_configured": false, 00:15:01.685 "data_offset": 0, 00:15:01.685 "data_size": 65536 00:15:01.685 }, 00:15:01.685 { 
00:15:01.685 "name": "BaseBdev3", 00:15:01.685 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:15:01.685 "is_configured": true, 00:15:01.685 "data_offset": 0, 00:15:01.685 "data_size": 65536 00:15:01.685 }, 00:15:01.685 { 00:15:01.685 "name": "BaseBdev4", 00:15:01.685 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:15:01.685 "is_configured": true, 00:15:01.685 "data_offset": 0, 00:15:01.685 "data_size": 65536 00:15:01.685 } 00:15:01.685 ] 00:15:01.685 }' 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.685 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.254 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.254 [2024-11-20 07:11:44.270759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.255 "name": "Existed_Raid", 00:15:02.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.255 "strip_size_kb": 0, 00:15:02.255 "state": "configuring", 00:15:02.255 "raid_level": "raid1", 00:15:02.255 "superblock": false, 00:15:02.255 
"num_base_bdevs": 4, 00:15:02.255 "num_base_bdevs_discovered": 2, 00:15:02.255 "num_base_bdevs_operational": 4, 00:15:02.255 "base_bdevs_list": [ 00:15:02.255 { 00:15:02.255 "name": null, 00:15:02.255 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:02.255 "is_configured": false, 00:15:02.255 "data_offset": 0, 00:15:02.255 "data_size": 65536 00:15:02.255 }, 00:15:02.255 { 00:15:02.255 "name": null, 00:15:02.255 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:15:02.255 "is_configured": false, 00:15:02.255 "data_offset": 0, 00:15:02.255 "data_size": 65536 00:15:02.255 }, 00:15:02.255 { 00:15:02.255 "name": "BaseBdev3", 00:15:02.255 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:15:02.255 "is_configured": true, 00:15:02.255 "data_offset": 0, 00:15:02.255 "data_size": 65536 00:15:02.255 }, 00:15:02.255 { 00:15:02.255 "name": "BaseBdev4", 00:15:02.255 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:15:02.255 "is_configured": true, 00:15:02.255 "data_offset": 0, 00:15:02.255 "data_size": 65536 00:15:02.255 } 00:15:02.255 ] 00:15:02.255 }' 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.255 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:02.823 07:11:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 [2024-11-20 07:11:44.906360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.823 07:11:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.823 "name": "Existed_Raid", 00:15:02.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.823 "strip_size_kb": 0, 00:15:02.823 "state": "configuring", 00:15:02.823 "raid_level": "raid1", 00:15:02.823 "superblock": false, 00:15:02.823 "num_base_bdevs": 4, 00:15:02.823 "num_base_bdevs_discovered": 3, 00:15:02.823 "num_base_bdevs_operational": 4, 00:15:02.823 "base_bdevs_list": [ 00:15:02.823 { 00:15:02.823 "name": null, 00:15:02.823 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:02.823 "is_configured": false, 00:15:02.823 "data_offset": 0, 00:15:02.823 "data_size": 65536 00:15:02.823 }, 00:15:02.823 { 00:15:02.823 "name": "BaseBdev2", 00:15:02.823 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:15:02.823 "is_configured": true, 00:15:02.823 "data_offset": 0, 00:15:02.823 "data_size": 65536 00:15:02.823 }, 00:15:02.823 { 00:15:02.823 "name": "BaseBdev3", 00:15:02.823 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:15:02.823 "is_configured": true, 00:15:02.823 "data_offset": 0, 00:15:02.823 "data_size": 65536 00:15:02.823 }, 00:15:02.823 { 00:15:02.823 "name": "BaseBdev4", 00:15:02.823 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:15:02.823 "is_configured": true, 00:15:02.823 "data_offset": 0, 00:15:02.823 "data_size": 65536 00:15:02.823 } 00:15:02.823 ] 00:15:02.823 }' 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.823 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2239d926-eb27-4eba-ab95-ddf990fad8dd 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.433 [2024-11-20 07:11:45.571027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:03.433 [2024-11-20 07:11:45.571115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:03.433 [2024-11-20 07:11:45.571127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:03.433 [2024-11-20 07:11:45.571484] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:03.433 [2024-11-20 07:11:45.571690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:03.433 [2024-11-20 07:11:45.571708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:03.433 [2024-11-20 07:11:45.572015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.433 NewBaseBdev 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:03.433 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.433 07:11:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.433 [ 00:15:03.433 { 00:15:03.433 "name": "NewBaseBdev", 00:15:03.433 "aliases": [ 00:15:03.433 "2239d926-eb27-4eba-ab95-ddf990fad8dd" 00:15:03.433 ], 00:15:03.433 "product_name": "Malloc disk", 00:15:03.433 "block_size": 512, 00:15:03.433 "num_blocks": 65536, 00:15:03.433 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:03.433 "assigned_rate_limits": { 00:15:03.433 "rw_ios_per_sec": 0, 00:15:03.433 "rw_mbytes_per_sec": 0, 00:15:03.433 "r_mbytes_per_sec": 0, 00:15:03.433 "w_mbytes_per_sec": 0 00:15:03.433 }, 00:15:03.433 "claimed": true, 00:15:03.433 "claim_type": "exclusive_write", 00:15:03.433 "zoned": false, 00:15:03.433 "supported_io_types": { 00:15:03.433 "read": true, 00:15:03.433 "write": true, 00:15:03.433 "unmap": true, 00:15:03.433 "flush": true, 00:15:03.433 "reset": true, 00:15:03.433 "nvme_admin": false, 00:15:03.433 "nvme_io": false, 00:15:03.433 "nvme_io_md": false, 00:15:03.434 "write_zeroes": true, 00:15:03.434 "zcopy": true, 00:15:03.434 "get_zone_info": false, 00:15:03.434 "zone_management": false, 00:15:03.434 "zone_append": false, 00:15:03.434 "compare": false, 00:15:03.434 "compare_and_write": false, 00:15:03.434 "abort": true, 00:15:03.434 "seek_hole": false, 00:15:03.434 "seek_data": false, 00:15:03.434 "copy": true, 00:15:03.434 "nvme_iov_md": false 00:15:03.434 }, 00:15:03.434 "memory_domains": [ 00:15:03.434 { 00:15:03.434 "dma_device_id": "system", 00:15:03.434 "dma_device_type": 1 00:15:03.434 }, 00:15:03.434 { 00:15:03.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.434 "dma_device_type": 2 00:15:03.434 } 00:15:03.434 ], 00:15:03.434 "driver_specific": {} 00:15:03.434 } 00:15:03.434 ] 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:03.434 07:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.434 "name": "Existed_Raid", 00:15:03.434 "uuid": "949d9a6a-b590-4b4f-ae2c-33f54fab66b5", 00:15:03.434 "strip_size_kb": 0, 00:15:03.434 "state": "online", 00:15:03.434 "raid_level": "raid1", 
00:15:03.434 "superblock": false, 00:15:03.434 "num_base_bdevs": 4, 00:15:03.434 "num_base_bdevs_discovered": 4, 00:15:03.434 "num_base_bdevs_operational": 4, 00:15:03.434 "base_bdevs_list": [ 00:15:03.434 { 00:15:03.434 "name": "NewBaseBdev", 00:15:03.434 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:03.434 "is_configured": true, 00:15:03.434 "data_offset": 0, 00:15:03.434 "data_size": 65536 00:15:03.434 }, 00:15:03.434 { 00:15:03.434 "name": "BaseBdev2", 00:15:03.434 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:15:03.434 "is_configured": true, 00:15:03.434 "data_offset": 0, 00:15:03.434 "data_size": 65536 00:15:03.434 }, 00:15:03.434 { 00:15:03.434 "name": "BaseBdev3", 00:15:03.434 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:15:03.434 "is_configured": true, 00:15:03.434 "data_offset": 0, 00:15:03.434 "data_size": 65536 00:15:03.434 }, 00:15:03.434 { 00:15:03.434 "name": "BaseBdev4", 00:15:03.434 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:15:03.434 "is_configured": true, 00:15:03.434 "data_offset": 0, 00:15:03.434 "data_size": 65536 00:15:03.434 } 00:15:03.434 ] 00:15:03.434 }' 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.434 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.002 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.003 [2024-11-20 07:11:46.074749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.003 "name": "Existed_Raid", 00:15:04.003 "aliases": [ 00:15:04.003 "949d9a6a-b590-4b4f-ae2c-33f54fab66b5" 00:15:04.003 ], 00:15:04.003 "product_name": "Raid Volume", 00:15:04.003 "block_size": 512, 00:15:04.003 "num_blocks": 65536, 00:15:04.003 "uuid": "949d9a6a-b590-4b4f-ae2c-33f54fab66b5", 00:15:04.003 "assigned_rate_limits": { 00:15:04.003 "rw_ios_per_sec": 0, 00:15:04.003 "rw_mbytes_per_sec": 0, 00:15:04.003 "r_mbytes_per_sec": 0, 00:15:04.003 "w_mbytes_per_sec": 0 00:15:04.003 }, 00:15:04.003 "claimed": false, 00:15:04.003 "zoned": false, 00:15:04.003 "supported_io_types": { 00:15:04.003 "read": true, 00:15:04.003 "write": true, 00:15:04.003 "unmap": false, 00:15:04.003 "flush": false, 00:15:04.003 "reset": true, 00:15:04.003 "nvme_admin": false, 00:15:04.003 "nvme_io": false, 00:15:04.003 "nvme_io_md": false, 00:15:04.003 "write_zeroes": true, 00:15:04.003 "zcopy": false, 00:15:04.003 "get_zone_info": false, 00:15:04.003 "zone_management": false, 00:15:04.003 "zone_append": false, 00:15:04.003 "compare": false, 00:15:04.003 "compare_and_write": false, 00:15:04.003 "abort": false, 00:15:04.003 "seek_hole": false, 00:15:04.003 "seek_data": false, 00:15:04.003 "copy": false, 00:15:04.003 
"nvme_iov_md": false 00:15:04.003 }, 00:15:04.003 "memory_domains": [ 00:15:04.003 { 00:15:04.003 "dma_device_id": "system", 00:15:04.003 "dma_device_type": 1 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.003 "dma_device_type": 2 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "dma_device_id": "system", 00:15:04.003 "dma_device_type": 1 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.003 "dma_device_type": 2 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "dma_device_id": "system", 00:15:04.003 "dma_device_type": 1 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.003 "dma_device_type": 2 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "dma_device_id": "system", 00:15:04.003 "dma_device_type": 1 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.003 "dma_device_type": 2 00:15:04.003 } 00:15:04.003 ], 00:15:04.003 "driver_specific": { 00:15:04.003 "raid": { 00:15:04.003 "uuid": "949d9a6a-b590-4b4f-ae2c-33f54fab66b5", 00:15:04.003 "strip_size_kb": 0, 00:15:04.003 "state": "online", 00:15:04.003 "raid_level": "raid1", 00:15:04.003 "superblock": false, 00:15:04.003 "num_base_bdevs": 4, 00:15:04.003 "num_base_bdevs_discovered": 4, 00:15:04.003 "num_base_bdevs_operational": 4, 00:15:04.003 "base_bdevs_list": [ 00:15:04.003 { 00:15:04.003 "name": "NewBaseBdev", 00:15:04.003 "uuid": "2239d926-eb27-4eba-ab95-ddf990fad8dd", 00:15:04.003 "is_configured": true, 00:15:04.003 "data_offset": 0, 00:15:04.003 "data_size": 65536 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "name": "BaseBdev2", 00:15:04.003 "uuid": "cf0da111-a2aa-44c4-b6db-6d50945dd161", 00:15:04.003 "is_configured": true, 00:15:04.003 "data_offset": 0, 00:15:04.003 "data_size": 65536 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "name": "BaseBdev3", 00:15:04.003 "uuid": "69164eee-2afe-42d8-9ab3-0e8ba85c30cd", 00:15:04.003 "is_configured": true, 
00:15:04.003 "data_offset": 0, 00:15:04.003 "data_size": 65536 00:15:04.003 }, 00:15:04.003 { 00:15:04.003 "name": "BaseBdev4", 00:15:04.003 "uuid": "680b08b3-82b5-4daa-8950-3470c23762d2", 00:15:04.003 "is_configured": true, 00:15:04.003 "data_offset": 0, 00:15:04.003 "data_size": 65536 00:15:04.003 } 00:15:04.003 ] 00:15:04.003 } 00:15:04.003 } 00:15:04.003 }' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:04.003 BaseBdev2 00:15:04.003 BaseBdev3 00:15:04.003 BaseBdev4' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.003 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.262 [2024-11-20 07:11:46.401732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.262 [2024-11-20 07:11:46.401776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.262 [2024-11-20 07:11:46.401906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.262 [2024-11-20 07:11:46.402302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.262 [2024-11-20 07:11:46.402329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73528 
00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73528 ']' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73528 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73528 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.262 killing process with pid 73528 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73528' 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73528 00:15:04.262 [2024-11-20 07:11:46.445454] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.262 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73528 00:15:04.831 [2024-11-20 07:11:46.955385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:06.212 00:15:06.212 real 0m12.390s 00:15:06.212 user 0m19.237s 00:15:06.212 sys 0m2.340s 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.212 ************************************ 00:15:06.212 END TEST raid_state_function_test 00:15:06.212 ************************************ 00:15:06.212 07:11:48 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:06.212 07:11:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:06.212 07:11:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.212 07:11:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.212 ************************************ 00:15:06.212 START TEST raid_state_function_test_sb 00:15:06.212 ************************************ 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:06.212 07:11:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74205 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:06.212 Process raid pid: 74205 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74205' 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74205 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74205 ']' 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.212 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.472 [2024-11-20 07:11:48.534357] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:15:06.472 [2024-11-20 07:11:48.534463] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.472 [2024-11-20 07:11:48.695136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.731 [2024-11-20 07:11:48.849153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.990 [2024-11-20 07:11:49.105216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.990 [2024-11-20 07:11:49.105288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.250 [2024-11-20 07:11:49.454088] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.250 [2024-11-20 07:11:49.454169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.250 [2024-11-20 07:11:49.454182] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.250 [2024-11-20 07:11:49.454195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.250 [2024-11-20 07:11:49.454202] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:07.250 [2024-11-20 07:11:49.454213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.250 [2024-11-20 07:11:49.454227] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:07.250 [2024-11-20 07:11:49.454238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.250 07:11:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.250 "name": "Existed_Raid", 00:15:07.250 "uuid": "a38bd0d5-98f3-482d-9e15-04319c4e8e37", 00:15:07.250 "strip_size_kb": 0, 00:15:07.250 "state": "configuring", 00:15:07.250 "raid_level": "raid1", 00:15:07.250 "superblock": true, 00:15:07.250 "num_base_bdevs": 4, 00:15:07.250 "num_base_bdevs_discovered": 0, 00:15:07.250 "num_base_bdevs_operational": 4, 00:15:07.250 "base_bdevs_list": [ 00:15:07.250 { 00:15:07.250 "name": "BaseBdev1", 00:15:07.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.250 "is_configured": false, 00:15:07.250 "data_offset": 0, 00:15:07.250 "data_size": 0 00:15:07.250 }, 00:15:07.250 { 00:15:07.250 "name": "BaseBdev2", 00:15:07.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.250 "is_configured": false, 00:15:07.250 "data_offset": 0, 00:15:07.250 "data_size": 0 00:15:07.250 }, 00:15:07.250 { 00:15:07.250 "name": "BaseBdev3", 00:15:07.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.250 "is_configured": false, 00:15:07.250 "data_offset": 0, 00:15:07.250 "data_size": 0 00:15:07.250 }, 00:15:07.250 { 00:15:07.250 "name": "BaseBdev4", 00:15:07.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.250 "is_configured": false, 00:15:07.250 "data_offset": 0, 00:15:07.250 "data_size": 0 00:15:07.250 } 00:15:07.250 ] 00:15:07.250 }' 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.250 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.818 07:11:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:07.818 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.818 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.818 [2024-11-20 07:11:49.897370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.818 [2024-11-20 07:11:49.897450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:07.818 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.818 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:07.818 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.818 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.818 [2024-11-20 07:11:49.905262] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.819 [2024-11-20 07:11:49.905327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.819 [2024-11-20 07:11:49.905354] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.819 [2024-11-20 07:11:49.905367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.819 [2024-11-20 07:11:49.905375] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:07.819 [2024-11-20 07:11:49.905385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.819 [2024-11-20 07:11:49.905392] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:07.819 [2024-11-20 07:11:49.905403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.819 [2024-11-20 07:11:49.958054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.819 BaseBdev1 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.819 [ 00:15:07.819 { 00:15:07.819 "name": "BaseBdev1", 00:15:07.819 "aliases": [ 00:15:07.819 "eff77734-2337-4d9f-8f62-f3fcf84fa8aa" 00:15:07.819 ], 00:15:07.819 "product_name": "Malloc disk", 00:15:07.819 "block_size": 512, 00:15:07.819 "num_blocks": 65536, 00:15:07.819 "uuid": "eff77734-2337-4d9f-8f62-f3fcf84fa8aa", 00:15:07.819 "assigned_rate_limits": { 00:15:07.819 "rw_ios_per_sec": 0, 00:15:07.819 "rw_mbytes_per_sec": 0, 00:15:07.819 "r_mbytes_per_sec": 0, 00:15:07.819 "w_mbytes_per_sec": 0 00:15:07.819 }, 00:15:07.819 "claimed": true, 00:15:07.819 "claim_type": "exclusive_write", 00:15:07.819 "zoned": false, 00:15:07.819 "supported_io_types": { 00:15:07.819 "read": true, 00:15:07.819 "write": true, 00:15:07.819 "unmap": true, 00:15:07.819 "flush": true, 00:15:07.819 "reset": true, 00:15:07.819 "nvme_admin": false, 00:15:07.819 "nvme_io": false, 00:15:07.819 "nvme_io_md": false, 00:15:07.819 "write_zeroes": true, 00:15:07.819 "zcopy": true, 00:15:07.819 "get_zone_info": false, 00:15:07.819 "zone_management": false, 00:15:07.819 "zone_append": false, 00:15:07.819 "compare": false, 00:15:07.819 "compare_and_write": false, 00:15:07.819 "abort": true, 00:15:07.819 "seek_hole": false, 00:15:07.819 "seek_data": false, 00:15:07.819 "copy": true, 00:15:07.819 "nvme_iov_md": false 00:15:07.819 }, 00:15:07.819 "memory_domains": [ 00:15:07.819 { 00:15:07.819 "dma_device_id": "system", 00:15:07.819 "dma_device_type": 1 00:15:07.819 }, 00:15:07.819 { 00:15:07.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.819 "dma_device_type": 2 00:15:07.819 } 00:15:07.819 ], 00:15:07.819 "driver_specific": {} 
00:15:07.819 } 00:15:07.819 ] 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.819 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.819 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.819 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.819 07:11:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.819 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.819 "name": "Existed_Raid", 00:15:07.819 "uuid": "d94eb9ce-6f29-480f-8003-67e53e0fab83", 00:15:07.819 "strip_size_kb": 0, 00:15:07.819 "state": "configuring", 00:15:07.819 "raid_level": "raid1", 00:15:07.819 "superblock": true, 00:15:07.819 "num_base_bdevs": 4, 00:15:07.819 "num_base_bdevs_discovered": 1, 00:15:07.819 "num_base_bdevs_operational": 4, 00:15:07.819 "base_bdevs_list": [ 00:15:07.819 { 00:15:07.819 "name": "BaseBdev1", 00:15:07.819 "uuid": "eff77734-2337-4d9f-8f62-f3fcf84fa8aa", 00:15:07.819 "is_configured": true, 00:15:07.819 "data_offset": 2048, 00:15:07.819 "data_size": 63488 00:15:07.819 }, 00:15:07.819 { 00:15:07.819 "name": "BaseBdev2", 00:15:07.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.819 "is_configured": false, 00:15:07.819 "data_offset": 0, 00:15:07.819 "data_size": 0 00:15:07.819 }, 00:15:07.819 { 00:15:07.819 "name": "BaseBdev3", 00:15:07.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.819 "is_configured": false, 00:15:07.819 "data_offset": 0, 00:15:07.819 "data_size": 0 00:15:07.819 }, 00:15:07.819 { 00:15:07.819 "name": "BaseBdev4", 00:15:07.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.819 "is_configured": false, 00:15:07.819 "data_offset": 0, 00:15:07.819 "data_size": 0 00:15:07.819 } 00:15:07.819 ] 00:15:07.819 }' 00:15:07.819 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.819 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.389 [2024-11-20 07:11:50.413423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.389 [2024-11-20 07:11:50.413506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.389 [2024-11-20 07:11:50.425485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.389 [2024-11-20 07:11:50.427639] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.389 [2024-11-20 07:11:50.427689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.389 [2024-11-20 07:11:50.427700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:08.389 [2024-11-20 07:11:50.427710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:08.389 [2024-11-20 07:11:50.427717] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:08.389 [2024-11-20 07:11:50.427726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:08.389 07:11:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.389 "name": 
"Existed_Raid", 00:15:08.389 "uuid": "6daab258-2c46-45ab-b21f-59d788ba8f0a", 00:15:08.389 "strip_size_kb": 0, 00:15:08.389 "state": "configuring", 00:15:08.389 "raid_level": "raid1", 00:15:08.389 "superblock": true, 00:15:08.389 "num_base_bdevs": 4, 00:15:08.389 "num_base_bdevs_discovered": 1, 00:15:08.389 "num_base_bdevs_operational": 4, 00:15:08.389 "base_bdevs_list": [ 00:15:08.389 { 00:15:08.389 "name": "BaseBdev1", 00:15:08.389 "uuid": "eff77734-2337-4d9f-8f62-f3fcf84fa8aa", 00:15:08.389 "is_configured": true, 00:15:08.389 "data_offset": 2048, 00:15:08.389 "data_size": 63488 00:15:08.389 }, 00:15:08.389 { 00:15:08.389 "name": "BaseBdev2", 00:15:08.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.389 "is_configured": false, 00:15:08.389 "data_offset": 0, 00:15:08.389 "data_size": 0 00:15:08.389 }, 00:15:08.389 { 00:15:08.389 "name": "BaseBdev3", 00:15:08.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.389 "is_configured": false, 00:15:08.389 "data_offset": 0, 00:15:08.389 "data_size": 0 00:15:08.389 }, 00:15:08.389 { 00:15:08.389 "name": "BaseBdev4", 00:15:08.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.389 "is_configured": false, 00:15:08.389 "data_offset": 0, 00:15:08.389 "data_size": 0 00:15:08.389 } 00:15:08.389 ] 00:15:08.389 }' 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.389 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.649 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:08.649 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.649 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.908 [2024-11-20 07:11:50.956178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.908 
BaseBdev2 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.908 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.909 [ 00:15:08.909 { 00:15:08.909 "name": "BaseBdev2", 00:15:08.909 "aliases": [ 00:15:08.909 "0be6f759-f598-42db-a6aa-e0432003c1ab" 00:15:08.909 ], 00:15:08.909 "product_name": "Malloc disk", 00:15:08.909 "block_size": 512, 00:15:08.909 "num_blocks": 65536, 00:15:08.909 "uuid": "0be6f759-f598-42db-a6aa-e0432003c1ab", 00:15:08.909 "assigned_rate_limits": { 
00:15:08.909 "rw_ios_per_sec": 0, 00:15:08.909 "rw_mbytes_per_sec": 0, 00:15:08.909 "r_mbytes_per_sec": 0, 00:15:08.909 "w_mbytes_per_sec": 0 00:15:08.909 }, 00:15:08.909 "claimed": true, 00:15:08.909 "claim_type": "exclusive_write", 00:15:08.909 "zoned": false, 00:15:08.909 "supported_io_types": { 00:15:08.909 "read": true, 00:15:08.909 "write": true, 00:15:08.909 "unmap": true, 00:15:08.909 "flush": true, 00:15:08.909 "reset": true, 00:15:08.909 "nvme_admin": false, 00:15:08.909 "nvme_io": false, 00:15:08.909 "nvme_io_md": false, 00:15:08.909 "write_zeroes": true, 00:15:08.909 "zcopy": true, 00:15:08.909 "get_zone_info": false, 00:15:08.909 "zone_management": false, 00:15:08.909 "zone_append": false, 00:15:08.909 "compare": false, 00:15:08.909 "compare_and_write": false, 00:15:08.909 "abort": true, 00:15:08.909 "seek_hole": false, 00:15:08.909 "seek_data": false, 00:15:08.909 "copy": true, 00:15:08.909 "nvme_iov_md": false 00:15:08.909 }, 00:15:08.909 "memory_domains": [ 00:15:08.909 { 00:15:08.909 "dma_device_id": "system", 00:15:08.909 "dma_device_type": 1 00:15:08.909 }, 00:15:08.909 { 00:15:08.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.909 "dma_device_type": 2 00:15:08.909 } 00:15:08.909 ], 00:15:08.909 "driver_specific": {} 00:15:08.909 } 00:15:08.909 ] 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.909 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.909 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.909 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.909 "name": "Existed_Raid", 00:15:08.909 "uuid": "6daab258-2c46-45ab-b21f-59d788ba8f0a", 00:15:08.909 "strip_size_kb": 0, 00:15:08.909 "state": "configuring", 00:15:08.909 "raid_level": "raid1", 00:15:08.909 "superblock": true, 00:15:08.909 "num_base_bdevs": 4, 00:15:08.909 "num_base_bdevs_discovered": 2, 00:15:08.909 "num_base_bdevs_operational": 4, 00:15:08.909 
"base_bdevs_list": [ 00:15:08.909 { 00:15:08.909 "name": "BaseBdev1", 00:15:08.909 "uuid": "eff77734-2337-4d9f-8f62-f3fcf84fa8aa", 00:15:08.909 "is_configured": true, 00:15:08.909 "data_offset": 2048, 00:15:08.909 "data_size": 63488 00:15:08.909 }, 00:15:08.909 { 00:15:08.909 "name": "BaseBdev2", 00:15:08.909 "uuid": "0be6f759-f598-42db-a6aa-e0432003c1ab", 00:15:08.909 "is_configured": true, 00:15:08.909 "data_offset": 2048, 00:15:08.909 "data_size": 63488 00:15:08.909 }, 00:15:08.909 { 00:15:08.909 "name": "BaseBdev3", 00:15:08.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.909 "is_configured": false, 00:15:08.909 "data_offset": 0, 00:15:08.909 "data_size": 0 00:15:08.909 }, 00:15:08.909 { 00:15:08.909 "name": "BaseBdev4", 00:15:08.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.909 "is_configured": false, 00:15:08.909 "data_offset": 0, 00:15:08.909 "data_size": 0 00:15:08.909 } 00:15:08.909 ] 00:15:08.909 }' 00:15:08.909 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.909 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.168 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.168 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.168 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.428 [2024-11-20 07:11:51.482649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.428 BaseBdev3 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.428 [ 00:15:09.428 { 00:15:09.428 "name": "BaseBdev3", 00:15:09.428 "aliases": [ 00:15:09.428 "886e1de8-561d-4d9c-8c88-a9baed6aed0b" 00:15:09.428 ], 00:15:09.428 "product_name": "Malloc disk", 00:15:09.428 "block_size": 512, 00:15:09.428 "num_blocks": 65536, 00:15:09.428 "uuid": "886e1de8-561d-4d9c-8c88-a9baed6aed0b", 00:15:09.428 "assigned_rate_limits": { 00:15:09.428 "rw_ios_per_sec": 0, 00:15:09.428 "rw_mbytes_per_sec": 0, 00:15:09.428 "r_mbytes_per_sec": 0, 00:15:09.428 "w_mbytes_per_sec": 0 00:15:09.428 }, 00:15:09.428 "claimed": true, 00:15:09.428 "claim_type": "exclusive_write", 00:15:09.428 "zoned": false, 00:15:09.428 "supported_io_types": { 00:15:09.428 "read": true, 00:15:09.428 
"write": true, 00:15:09.428 "unmap": true, 00:15:09.428 "flush": true, 00:15:09.428 "reset": true, 00:15:09.428 "nvme_admin": false, 00:15:09.428 "nvme_io": false, 00:15:09.428 "nvme_io_md": false, 00:15:09.428 "write_zeroes": true, 00:15:09.428 "zcopy": true, 00:15:09.428 "get_zone_info": false, 00:15:09.428 "zone_management": false, 00:15:09.428 "zone_append": false, 00:15:09.428 "compare": false, 00:15:09.428 "compare_and_write": false, 00:15:09.428 "abort": true, 00:15:09.428 "seek_hole": false, 00:15:09.428 "seek_data": false, 00:15:09.428 "copy": true, 00:15:09.428 "nvme_iov_md": false 00:15:09.428 }, 00:15:09.428 "memory_domains": [ 00:15:09.428 { 00:15:09.428 "dma_device_id": "system", 00:15:09.428 "dma_device_type": 1 00:15:09.428 }, 00:15:09.428 { 00:15:09.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.428 "dma_device_type": 2 00:15:09.428 } 00:15:09.428 ], 00:15:09.428 "driver_specific": {} 00:15:09.428 } 00:15:09.428 ] 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.428 "name": "Existed_Raid", 00:15:09.428 "uuid": "6daab258-2c46-45ab-b21f-59d788ba8f0a", 00:15:09.428 "strip_size_kb": 0, 00:15:09.428 "state": "configuring", 00:15:09.428 "raid_level": "raid1", 00:15:09.428 "superblock": true, 00:15:09.428 "num_base_bdevs": 4, 00:15:09.428 "num_base_bdevs_discovered": 3, 00:15:09.428 "num_base_bdevs_operational": 4, 00:15:09.428 "base_bdevs_list": [ 00:15:09.428 { 00:15:09.428 "name": "BaseBdev1", 00:15:09.428 "uuid": "eff77734-2337-4d9f-8f62-f3fcf84fa8aa", 00:15:09.428 "is_configured": true, 00:15:09.428 "data_offset": 2048, 00:15:09.428 "data_size": 63488 00:15:09.428 }, 00:15:09.428 { 00:15:09.428 "name": "BaseBdev2", 00:15:09.428 "uuid": 
"0be6f759-f598-42db-a6aa-e0432003c1ab", 00:15:09.428 "is_configured": true, 00:15:09.428 "data_offset": 2048, 00:15:09.428 "data_size": 63488 00:15:09.428 }, 00:15:09.428 { 00:15:09.428 "name": "BaseBdev3", 00:15:09.428 "uuid": "886e1de8-561d-4d9c-8c88-a9baed6aed0b", 00:15:09.428 "is_configured": true, 00:15:09.428 "data_offset": 2048, 00:15:09.428 "data_size": 63488 00:15:09.428 }, 00:15:09.428 { 00:15:09.428 "name": "BaseBdev4", 00:15:09.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.428 "is_configured": false, 00:15:09.428 "data_offset": 0, 00:15:09.428 "data_size": 0 00:15:09.428 } 00:15:09.428 ] 00:15:09.428 }' 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.428 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.688 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:09.688 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.688 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.948 [2024-11-20 07:11:51.990289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.948 [2024-11-20 07:11:51.990627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:09.948 [2024-11-20 07:11:51.990644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.948 [2024-11-20 07:11:51.990955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:09.948 [2024-11-20 07:11:51.991128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:09.948 [2024-11-20 07:11:51.991147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:15:09.948 [2024-11-20 07:11:51.991312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.948 BaseBdev4 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.948 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.948 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.948 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:09.948 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.948 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.948 [ 00:15:09.948 { 00:15:09.948 "name": "BaseBdev4", 00:15:09.948 "aliases": [ 00:15:09.948 "01490046-9d17-426a-855a-75808edb3fc0" 00:15:09.948 ], 00:15:09.948 "product_name": "Malloc disk", 00:15:09.948 "block_size": 512, 00:15:09.948 
"num_blocks": 65536, 00:15:09.948 "uuid": "01490046-9d17-426a-855a-75808edb3fc0", 00:15:09.948 "assigned_rate_limits": { 00:15:09.948 "rw_ios_per_sec": 0, 00:15:09.948 "rw_mbytes_per_sec": 0, 00:15:09.948 "r_mbytes_per_sec": 0, 00:15:09.948 "w_mbytes_per_sec": 0 00:15:09.948 }, 00:15:09.948 "claimed": true, 00:15:09.948 "claim_type": "exclusive_write", 00:15:09.948 "zoned": false, 00:15:09.948 "supported_io_types": { 00:15:09.948 "read": true, 00:15:09.948 "write": true, 00:15:09.948 "unmap": true, 00:15:09.948 "flush": true, 00:15:09.948 "reset": true, 00:15:09.948 "nvme_admin": false, 00:15:09.948 "nvme_io": false, 00:15:09.948 "nvme_io_md": false, 00:15:09.948 "write_zeroes": true, 00:15:09.948 "zcopy": true, 00:15:09.948 "get_zone_info": false, 00:15:09.948 "zone_management": false, 00:15:09.948 "zone_append": false, 00:15:09.948 "compare": false, 00:15:09.948 "compare_and_write": false, 00:15:09.948 "abort": true, 00:15:09.948 "seek_hole": false, 00:15:09.948 "seek_data": false, 00:15:09.948 "copy": true, 00:15:09.948 "nvme_iov_md": false 00:15:09.948 }, 00:15:09.948 "memory_domains": [ 00:15:09.948 { 00:15:09.948 "dma_device_id": "system", 00:15:09.948 "dma_device_type": 1 00:15:09.948 }, 00:15:09.948 { 00:15:09.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.948 "dma_device_type": 2 00:15:09.948 } 00:15:09.948 ], 00:15:09.949 "driver_specific": {} 00:15:09.949 } 00:15:09.949 ] 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.949 "name": "Existed_Raid", 00:15:09.949 "uuid": "6daab258-2c46-45ab-b21f-59d788ba8f0a", 00:15:09.949 "strip_size_kb": 0, 00:15:09.949 "state": "online", 00:15:09.949 "raid_level": "raid1", 00:15:09.949 "superblock": true, 00:15:09.949 "num_base_bdevs": 4, 
00:15:09.949 "num_base_bdevs_discovered": 4, 00:15:09.949 "num_base_bdevs_operational": 4, 00:15:09.949 "base_bdevs_list": [ 00:15:09.949 { 00:15:09.949 "name": "BaseBdev1", 00:15:09.949 "uuid": "eff77734-2337-4d9f-8f62-f3fcf84fa8aa", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 2048, 00:15:09.949 "data_size": 63488 00:15:09.949 }, 00:15:09.949 { 00:15:09.949 "name": "BaseBdev2", 00:15:09.949 "uuid": "0be6f759-f598-42db-a6aa-e0432003c1ab", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 2048, 00:15:09.949 "data_size": 63488 00:15:09.949 }, 00:15:09.949 { 00:15:09.949 "name": "BaseBdev3", 00:15:09.949 "uuid": "886e1de8-561d-4d9c-8c88-a9baed6aed0b", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 2048, 00:15:09.949 "data_size": 63488 00:15:09.949 }, 00:15:09.949 { 00:15:09.949 "name": "BaseBdev4", 00:15:09.949 "uuid": "01490046-9d17-426a-855a-75808edb3fc0", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 2048, 00:15:09.949 "data_size": 63488 00:15:09.949 } 00:15:09.949 ] 00:15:09.949 }' 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.949 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.209 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:10.209 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:10.209 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.209 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.209 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.209 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.468 
07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:10.468 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.468 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.468 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.468 [2024-11-20 07:11:52.481887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.468 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.468 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:10.468 "name": "Existed_Raid", 00:15:10.468 "aliases": [ 00:15:10.468 "6daab258-2c46-45ab-b21f-59d788ba8f0a" 00:15:10.468 ], 00:15:10.468 "product_name": "Raid Volume", 00:15:10.468 "block_size": 512, 00:15:10.468 "num_blocks": 63488, 00:15:10.468 "uuid": "6daab258-2c46-45ab-b21f-59d788ba8f0a", 00:15:10.468 "assigned_rate_limits": { 00:15:10.468 "rw_ios_per_sec": 0, 00:15:10.468 "rw_mbytes_per_sec": 0, 00:15:10.468 "r_mbytes_per_sec": 0, 00:15:10.468 "w_mbytes_per_sec": 0 00:15:10.468 }, 00:15:10.468 "claimed": false, 00:15:10.468 "zoned": false, 00:15:10.468 "supported_io_types": { 00:15:10.468 "read": true, 00:15:10.468 "write": true, 00:15:10.468 "unmap": false, 00:15:10.468 "flush": false, 00:15:10.468 "reset": true, 00:15:10.468 "nvme_admin": false, 00:15:10.468 "nvme_io": false, 00:15:10.468 "nvme_io_md": false, 00:15:10.468 "write_zeroes": true, 00:15:10.468 "zcopy": false, 00:15:10.468 "get_zone_info": false, 00:15:10.468 "zone_management": false, 00:15:10.468 "zone_append": false, 00:15:10.468 "compare": false, 00:15:10.468 "compare_and_write": false, 00:15:10.468 "abort": false, 00:15:10.468 "seek_hole": false, 00:15:10.468 "seek_data": false, 00:15:10.468 "copy": false, 00:15:10.468 
"nvme_iov_md": false 00:15:10.468 }, 00:15:10.468 "memory_domains": [ 00:15:10.468 { 00:15:10.468 "dma_device_id": "system", 00:15:10.468 "dma_device_type": 1 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.468 "dma_device_type": 2 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "dma_device_id": "system", 00:15:10.468 "dma_device_type": 1 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.468 "dma_device_type": 2 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "dma_device_id": "system", 00:15:10.468 "dma_device_type": 1 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.468 "dma_device_type": 2 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "dma_device_id": "system", 00:15:10.468 "dma_device_type": 1 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.468 "dma_device_type": 2 00:15:10.468 } 00:15:10.468 ], 00:15:10.468 "driver_specific": { 00:15:10.468 "raid": { 00:15:10.468 "uuid": "6daab258-2c46-45ab-b21f-59d788ba8f0a", 00:15:10.468 "strip_size_kb": 0, 00:15:10.468 "state": "online", 00:15:10.468 "raid_level": "raid1", 00:15:10.468 "superblock": true, 00:15:10.468 "num_base_bdevs": 4, 00:15:10.468 "num_base_bdevs_discovered": 4, 00:15:10.468 "num_base_bdevs_operational": 4, 00:15:10.468 "base_bdevs_list": [ 00:15:10.468 { 00:15:10.468 "name": "BaseBdev1", 00:15:10.468 "uuid": "eff77734-2337-4d9f-8f62-f3fcf84fa8aa", 00:15:10.468 "is_configured": true, 00:15:10.468 "data_offset": 2048, 00:15:10.468 "data_size": 63488 00:15:10.468 }, 00:15:10.468 { 00:15:10.468 "name": "BaseBdev2", 00:15:10.468 "uuid": "0be6f759-f598-42db-a6aa-e0432003c1ab", 00:15:10.468 "is_configured": true, 00:15:10.468 "data_offset": 2048, 00:15:10.469 "data_size": 63488 00:15:10.469 }, 00:15:10.469 { 00:15:10.469 "name": "BaseBdev3", 00:15:10.469 "uuid": "886e1de8-561d-4d9c-8c88-a9baed6aed0b", 00:15:10.469 "is_configured": true, 
00:15:10.469 "data_offset": 2048, 00:15:10.469 "data_size": 63488 00:15:10.469 }, 00:15:10.469 { 00:15:10.469 "name": "BaseBdev4", 00:15:10.469 "uuid": "01490046-9d17-426a-855a-75808edb3fc0", 00:15:10.469 "is_configured": true, 00:15:10.469 "data_offset": 2048, 00:15:10.469 "data_size": 63488 00:15:10.469 } 00:15:10.469 ] 00:15:10.469 } 00:15:10.469 } 00:15:10.469 }' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:10.469 BaseBdev2 00:15:10.469 BaseBdev3 00:15:10.469 BaseBdev4' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.469 07:11:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.469 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 [2024-11-20 07:11:52.837019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:10.729 07:11:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.989 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.989 "name": "Existed_Raid", 00:15:10.989 "uuid": "6daab258-2c46-45ab-b21f-59d788ba8f0a", 00:15:10.989 "strip_size_kb": 0, 00:15:10.989 
"state": "online", 00:15:10.989 "raid_level": "raid1", 00:15:10.989 "superblock": true, 00:15:10.989 "num_base_bdevs": 4, 00:15:10.989 "num_base_bdevs_discovered": 3, 00:15:10.989 "num_base_bdevs_operational": 3, 00:15:10.989 "base_bdevs_list": [ 00:15:10.989 { 00:15:10.989 "name": null, 00:15:10.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.989 "is_configured": false, 00:15:10.989 "data_offset": 0, 00:15:10.989 "data_size": 63488 00:15:10.989 }, 00:15:10.989 { 00:15:10.989 "name": "BaseBdev2", 00:15:10.989 "uuid": "0be6f759-f598-42db-a6aa-e0432003c1ab", 00:15:10.989 "is_configured": true, 00:15:10.989 "data_offset": 2048, 00:15:10.989 "data_size": 63488 00:15:10.989 }, 00:15:10.989 { 00:15:10.989 "name": "BaseBdev3", 00:15:10.989 "uuid": "886e1de8-561d-4d9c-8c88-a9baed6aed0b", 00:15:10.989 "is_configured": true, 00:15:10.989 "data_offset": 2048, 00:15:10.989 "data_size": 63488 00:15:10.989 }, 00:15:10.989 { 00:15:10.989 "name": "BaseBdev4", 00:15:10.989 "uuid": "01490046-9d17-426a-855a-75808edb3fc0", 00:15:10.989 "is_configured": true, 00:15:10.989 "data_offset": 2048, 00:15:10.989 "data_size": 63488 00:15:10.989 } 00:15:10.989 ] 00:15:10.989 }' 00:15:10.989 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.989 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.248 07:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.248 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.248 [2024-11-20 07:11:53.487752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:11.507 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.508 [2024-11-20 07:11:53.654513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.508 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.768 [2024-11-20 07:11:53.820407] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:11.768 [2024-11-20 07:11:53.820521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.768 [2024-11-20 07:11:53.928717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.768 [2024-11-20 07:11:53.928790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.768 [2024-11-20 07:11:53.928805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.768 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.028 BaseBdev2 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:12.028 [ 00:15:12.028 { 00:15:12.028 "name": "BaseBdev2", 00:15:12.028 "aliases": [ 00:15:12.028 "b0318cfa-9cd4-496f-96e7-78c2cab5ad98" 00:15:12.028 ], 00:15:12.028 "product_name": "Malloc disk", 00:15:12.028 "block_size": 512, 00:15:12.028 "num_blocks": 65536, 00:15:12.028 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:12.028 "assigned_rate_limits": { 00:15:12.028 "rw_ios_per_sec": 0, 00:15:12.028 "rw_mbytes_per_sec": 0, 00:15:12.028 "r_mbytes_per_sec": 0, 00:15:12.028 "w_mbytes_per_sec": 0 00:15:12.028 }, 00:15:12.028 "claimed": false, 00:15:12.028 "zoned": false, 00:15:12.028 "supported_io_types": { 00:15:12.028 "read": true, 00:15:12.028 "write": true, 00:15:12.028 "unmap": true, 00:15:12.028 "flush": true, 00:15:12.028 "reset": true, 00:15:12.028 "nvme_admin": false, 00:15:12.028 "nvme_io": false, 00:15:12.028 "nvme_io_md": false, 00:15:12.028 "write_zeroes": true, 00:15:12.028 "zcopy": true, 00:15:12.028 "get_zone_info": false, 00:15:12.028 "zone_management": false, 00:15:12.028 "zone_append": false, 00:15:12.028 "compare": false, 00:15:12.028 "compare_and_write": false, 00:15:12.028 "abort": true, 00:15:12.028 "seek_hole": false, 00:15:12.028 "seek_data": false, 00:15:12.028 "copy": true, 00:15:12.028 "nvme_iov_md": false 00:15:12.028 }, 00:15:12.028 "memory_domains": [ 00:15:12.028 { 00:15:12.028 "dma_device_id": "system", 00:15:12.028 "dma_device_type": 1 00:15:12.028 }, 00:15:12.028 { 00:15:12.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.028 "dma_device_type": 2 00:15:12.028 } 00:15:12.028 ], 00:15:12.028 "driver_specific": {} 00:15:12.028 } 00:15:12.028 ] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.028 07:11:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.028 BaseBdev3 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:12.028 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.028 07:11:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.028 [ 00:15:12.028 { 00:15:12.028 "name": "BaseBdev3", 00:15:12.028 "aliases": [ 00:15:12.028 "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2" 00:15:12.028 ], 00:15:12.028 "product_name": "Malloc disk", 00:15:12.028 "block_size": 512, 00:15:12.028 "num_blocks": 65536, 00:15:12.028 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:12.028 "assigned_rate_limits": { 00:15:12.028 "rw_ios_per_sec": 0, 00:15:12.028 "rw_mbytes_per_sec": 0, 00:15:12.028 "r_mbytes_per_sec": 0, 00:15:12.028 "w_mbytes_per_sec": 0 00:15:12.028 }, 00:15:12.028 "claimed": false, 00:15:12.028 "zoned": false, 00:15:12.028 "supported_io_types": { 00:15:12.028 "read": true, 00:15:12.028 "write": true, 00:15:12.028 "unmap": true, 00:15:12.028 "flush": true, 00:15:12.028 "reset": true, 00:15:12.029 "nvme_admin": false, 00:15:12.029 "nvme_io": false, 00:15:12.029 "nvme_io_md": false, 00:15:12.029 "write_zeroes": true, 00:15:12.029 "zcopy": true, 00:15:12.029 "get_zone_info": false, 00:15:12.029 "zone_management": false, 00:15:12.029 "zone_append": false, 00:15:12.029 "compare": false, 00:15:12.029 "compare_and_write": false, 00:15:12.029 "abort": true, 00:15:12.029 "seek_hole": false, 00:15:12.029 "seek_data": false, 00:15:12.029 "copy": true, 00:15:12.029 "nvme_iov_md": false 00:15:12.029 }, 00:15:12.029 "memory_domains": [ 00:15:12.029 { 00:15:12.029 "dma_device_id": "system", 00:15:12.029 "dma_device_type": 1 00:15:12.029 }, 00:15:12.029 { 00:15:12.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.029 "dma_device_type": 2 00:15:12.029 } 00:15:12.029 ], 00:15:12.029 "driver_specific": {} 00:15:12.029 } 00:15:12.029 ] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.029 BaseBdev4 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.029 [ 00:15:12.029 { 00:15:12.029 "name": "BaseBdev4", 00:15:12.029 "aliases": [ 00:15:12.029 "57488d1a-f4c9-4776-a8f8-563c60264b0d" 00:15:12.029 ], 00:15:12.029 "product_name": "Malloc disk", 00:15:12.029 "block_size": 512, 00:15:12.029 "num_blocks": 65536, 00:15:12.029 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:12.029 "assigned_rate_limits": { 00:15:12.029 "rw_ios_per_sec": 0, 00:15:12.029 "rw_mbytes_per_sec": 0, 00:15:12.029 "r_mbytes_per_sec": 0, 00:15:12.029 "w_mbytes_per_sec": 0 00:15:12.029 }, 00:15:12.029 "claimed": false, 00:15:12.029 "zoned": false, 00:15:12.029 "supported_io_types": { 00:15:12.029 "read": true, 00:15:12.029 "write": true, 00:15:12.029 "unmap": true, 00:15:12.029 "flush": true, 00:15:12.029 "reset": true, 00:15:12.029 "nvme_admin": false, 00:15:12.029 "nvme_io": false, 00:15:12.029 "nvme_io_md": false, 00:15:12.029 "write_zeroes": true, 00:15:12.029 "zcopy": true, 00:15:12.029 "get_zone_info": false, 00:15:12.029 "zone_management": false, 00:15:12.029 "zone_append": false, 00:15:12.029 "compare": false, 00:15:12.029 "compare_and_write": false, 00:15:12.029 "abort": true, 00:15:12.029 "seek_hole": false, 00:15:12.029 "seek_data": false, 00:15:12.029 "copy": true, 00:15:12.029 "nvme_iov_md": false 00:15:12.029 }, 00:15:12.029 "memory_domains": [ 00:15:12.029 { 00:15:12.029 "dma_device_id": "system", 00:15:12.029 "dma_device_type": 1 00:15:12.029 }, 00:15:12.029 { 00:15:12.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.029 "dma_device_type": 2 00:15:12.029 } 00:15:12.029 ], 00:15:12.029 "driver_specific": {} 00:15:12.029 } 00:15:12.029 ] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.029 [2024-11-20 07:11:54.231354] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.029 [2024-11-20 07:11:54.231416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.029 [2024-11-20 07:11:54.231440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.029 [2024-11-20 07:11:54.233468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.029 [2024-11-20 07:11:54.233522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.029 "name": "Existed_Raid", 00:15:12.029 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:12.029 "strip_size_kb": 0, 00:15:12.029 "state": "configuring", 00:15:12.029 "raid_level": "raid1", 00:15:12.029 "superblock": true, 00:15:12.029 "num_base_bdevs": 4, 00:15:12.029 "num_base_bdevs_discovered": 3, 00:15:12.029 "num_base_bdevs_operational": 4, 00:15:12.029 "base_bdevs_list": [ 00:15:12.029 { 00:15:12.029 "name": "BaseBdev1", 00:15:12.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.029 "is_configured": false, 00:15:12.029 "data_offset": 0, 00:15:12.029 "data_size": 0 00:15:12.029 }, 00:15:12.029 { 00:15:12.029 "name": "BaseBdev2", 00:15:12.029 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 
00:15:12.029 "is_configured": true, 00:15:12.029 "data_offset": 2048, 00:15:12.029 "data_size": 63488 00:15:12.029 }, 00:15:12.029 { 00:15:12.029 "name": "BaseBdev3", 00:15:12.029 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:12.029 "is_configured": true, 00:15:12.029 "data_offset": 2048, 00:15:12.029 "data_size": 63488 00:15:12.029 }, 00:15:12.029 { 00:15:12.029 "name": "BaseBdev4", 00:15:12.029 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:12.029 "is_configured": true, 00:15:12.029 "data_offset": 2048, 00:15:12.029 "data_size": 63488 00:15:12.029 } 00:15:12.029 ] 00:15:12.029 }' 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.029 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.599 [2024-11-20 07:11:54.738489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.599 "name": "Existed_Raid", 00:15:12.599 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:12.599 "strip_size_kb": 0, 00:15:12.599 "state": "configuring", 00:15:12.599 "raid_level": "raid1", 00:15:12.599 "superblock": true, 00:15:12.599 "num_base_bdevs": 4, 00:15:12.599 "num_base_bdevs_discovered": 2, 00:15:12.599 "num_base_bdevs_operational": 4, 00:15:12.599 "base_bdevs_list": [ 00:15:12.599 { 00:15:12.599 "name": "BaseBdev1", 00:15:12.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.599 "is_configured": false, 00:15:12.599 "data_offset": 0, 00:15:12.599 "data_size": 0 00:15:12.599 }, 00:15:12.599 { 00:15:12.599 "name": null, 00:15:12.599 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:12.599 
"is_configured": false, 00:15:12.599 "data_offset": 0, 00:15:12.599 "data_size": 63488 00:15:12.599 }, 00:15:12.599 { 00:15:12.599 "name": "BaseBdev3", 00:15:12.599 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:12.599 "is_configured": true, 00:15:12.599 "data_offset": 2048, 00:15:12.599 "data_size": 63488 00:15:12.599 }, 00:15:12.599 { 00:15:12.599 "name": "BaseBdev4", 00:15:12.599 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:12.599 "is_configured": true, 00:15:12.599 "data_offset": 2048, 00:15:12.599 "data_size": 63488 00:15:12.599 } 00:15:12.599 ] 00:15:12.599 }' 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.599 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 [2024-11-20 07:11:55.291448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.170 BaseBdev1 
00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 [ 00:15:13.170 { 00:15:13.170 "name": "BaseBdev1", 00:15:13.170 "aliases": [ 00:15:13.170 "2ffa2ff5-51f3-479b-a668-ab84bf029bba" 00:15:13.170 ], 00:15:13.170 "product_name": "Malloc disk", 00:15:13.170 "block_size": 512, 00:15:13.170 "num_blocks": 65536, 00:15:13.170 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:13.170 "assigned_rate_limits": { 00:15:13.170 
"rw_ios_per_sec": 0, 00:15:13.170 "rw_mbytes_per_sec": 0, 00:15:13.170 "r_mbytes_per_sec": 0, 00:15:13.170 "w_mbytes_per_sec": 0 00:15:13.170 }, 00:15:13.170 "claimed": true, 00:15:13.170 "claim_type": "exclusive_write", 00:15:13.170 "zoned": false, 00:15:13.170 "supported_io_types": { 00:15:13.170 "read": true, 00:15:13.170 "write": true, 00:15:13.170 "unmap": true, 00:15:13.170 "flush": true, 00:15:13.170 "reset": true, 00:15:13.170 "nvme_admin": false, 00:15:13.170 "nvme_io": false, 00:15:13.170 "nvme_io_md": false, 00:15:13.170 "write_zeroes": true, 00:15:13.170 "zcopy": true, 00:15:13.170 "get_zone_info": false, 00:15:13.170 "zone_management": false, 00:15:13.170 "zone_append": false, 00:15:13.170 "compare": false, 00:15:13.170 "compare_and_write": false, 00:15:13.170 "abort": true, 00:15:13.170 "seek_hole": false, 00:15:13.170 "seek_data": false, 00:15:13.170 "copy": true, 00:15:13.170 "nvme_iov_md": false 00:15:13.170 }, 00:15:13.170 "memory_domains": [ 00:15:13.170 { 00:15:13.170 "dma_device_id": "system", 00:15:13.170 "dma_device_type": 1 00:15:13.170 }, 00:15:13.170 { 00:15:13.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.170 "dma_device_type": 2 00:15:13.170 } 00:15:13.170 ], 00:15:13.170 "driver_specific": {} 00:15:13.170 } 00:15:13.170 ] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.170 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.170 "name": "Existed_Raid", 00:15:13.170 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:13.170 "strip_size_kb": 0, 00:15:13.170 "state": "configuring", 00:15:13.170 "raid_level": "raid1", 00:15:13.170 "superblock": true, 00:15:13.170 "num_base_bdevs": 4, 00:15:13.170 "num_base_bdevs_discovered": 3, 00:15:13.170 "num_base_bdevs_operational": 4, 00:15:13.170 "base_bdevs_list": [ 00:15:13.170 { 00:15:13.170 "name": "BaseBdev1", 00:15:13.170 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:13.170 "is_configured": true, 00:15:13.170 "data_offset": 2048, 00:15:13.170 "data_size": 63488 
00:15:13.170 }, 00:15:13.170 { 00:15:13.170 "name": null, 00:15:13.170 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:13.170 "is_configured": false, 00:15:13.170 "data_offset": 0, 00:15:13.170 "data_size": 63488 00:15:13.170 }, 00:15:13.170 { 00:15:13.170 "name": "BaseBdev3", 00:15:13.170 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:13.170 "is_configured": true, 00:15:13.170 "data_offset": 2048, 00:15:13.170 "data_size": 63488 00:15:13.170 }, 00:15:13.170 { 00:15:13.170 "name": "BaseBdev4", 00:15:13.170 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:13.170 "is_configured": true, 00:15:13.171 "data_offset": 2048, 00:15:13.171 "data_size": 63488 00:15:13.171 } 00:15:13.171 ] 00:15:13.171 }' 00:15:13.171 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.171 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.739 
[2024-11-20 07:11:55.830648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.739 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.740 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.740 07:11:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.740 "name": "Existed_Raid", 00:15:13.740 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:13.740 "strip_size_kb": 0, 00:15:13.740 "state": "configuring", 00:15:13.740 "raid_level": "raid1", 00:15:13.740 "superblock": true, 00:15:13.740 "num_base_bdevs": 4, 00:15:13.740 "num_base_bdevs_discovered": 2, 00:15:13.740 "num_base_bdevs_operational": 4, 00:15:13.740 "base_bdevs_list": [ 00:15:13.740 { 00:15:13.740 "name": "BaseBdev1", 00:15:13.740 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:13.740 "is_configured": true, 00:15:13.740 "data_offset": 2048, 00:15:13.740 "data_size": 63488 00:15:13.740 }, 00:15:13.740 { 00:15:13.740 "name": null, 00:15:13.740 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:13.740 "is_configured": false, 00:15:13.740 "data_offset": 0, 00:15:13.740 "data_size": 63488 00:15:13.740 }, 00:15:13.740 { 00:15:13.740 "name": null, 00:15:13.740 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:13.740 "is_configured": false, 00:15:13.740 "data_offset": 0, 00:15:13.740 "data_size": 63488 00:15:13.740 }, 00:15:13.740 { 00:15:13.740 "name": "BaseBdev4", 00:15:13.740 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:13.740 "is_configured": true, 00:15:13.740 "data_offset": 2048, 00:15:13.740 "data_size": 63488 00:15:13.740 } 00:15:13.740 ] 00:15:13.740 }' 00:15:13.740 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.740 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.310 
07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.310 [2024-11-20 07:11:56.357801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.310 "name": "Existed_Raid", 00:15:14.310 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:14.310 "strip_size_kb": 0, 00:15:14.310 "state": "configuring", 00:15:14.310 "raid_level": "raid1", 00:15:14.310 "superblock": true, 00:15:14.310 "num_base_bdevs": 4, 00:15:14.310 "num_base_bdevs_discovered": 3, 00:15:14.310 "num_base_bdevs_operational": 4, 00:15:14.310 "base_bdevs_list": [ 00:15:14.310 { 00:15:14.310 "name": "BaseBdev1", 00:15:14.310 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:14.310 "is_configured": true, 00:15:14.310 "data_offset": 2048, 00:15:14.310 "data_size": 63488 00:15:14.310 }, 00:15:14.310 { 00:15:14.310 "name": null, 00:15:14.310 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:14.310 "is_configured": false, 00:15:14.310 "data_offset": 0, 00:15:14.310 "data_size": 63488 00:15:14.310 }, 00:15:14.310 { 00:15:14.310 "name": "BaseBdev3", 00:15:14.310 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:14.310 "is_configured": true, 00:15:14.310 "data_offset": 2048, 00:15:14.310 "data_size": 63488 00:15:14.310 }, 00:15:14.310 { 00:15:14.310 "name": "BaseBdev4", 00:15:14.310 "uuid": 
"57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:14.310 "is_configured": true, 00:15:14.310 "data_offset": 2048, 00:15:14.310 "data_size": 63488 00:15:14.310 } 00:15:14.310 ] 00:15:14.310 }' 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.310 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.591 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.850 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.850 [2024-11-20 07:11:56.909294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.850 "name": "Existed_Raid", 00:15:14.850 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:14.850 "strip_size_kb": 0, 00:15:14.850 "state": "configuring", 00:15:14.850 "raid_level": "raid1", 00:15:14.850 "superblock": true, 00:15:14.850 "num_base_bdevs": 4, 00:15:14.850 "num_base_bdevs_discovered": 2, 00:15:14.850 "num_base_bdevs_operational": 4, 00:15:14.850 "base_bdevs_list": [ 00:15:14.850 { 00:15:14.850 "name": null, 00:15:14.850 
"uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:14.850 "is_configured": false, 00:15:14.850 "data_offset": 0, 00:15:14.850 "data_size": 63488 00:15:14.850 }, 00:15:14.850 { 00:15:14.850 "name": null, 00:15:14.850 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:14.850 "is_configured": false, 00:15:14.850 "data_offset": 0, 00:15:14.850 "data_size": 63488 00:15:14.850 }, 00:15:14.850 { 00:15:14.850 "name": "BaseBdev3", 00:15:14.850 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:14.850 "is_configured": true, 00:15:14.850 "data_offset": 2048, 00:15:14.850 "data_size": 63488 00:15:14.850 }, 00:15:14.850 { 00:15:14.850 "name": "BaseBdev4", 00:15:14.850 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:14.850 "is_configured": true, 00:15:14.850 "data_offset": 2048, 00:15:14.850 "data_size": 63488 00:15:14.850 } 00:15:14.850 ] 00:15:14.850 }' 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.850 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 [2024-11-20 07:11:57.513248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.416 "name": "Existed_Raid", 00:15:15.416 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:15.416 "strip_size_kb": 0, 00:15:15.416 "state": "configuring", 00:15:15.416 "raid_level": "raid1", 00:15:15.417 "superblock": true, 00:15:15.417 "num_base_bdevs": 4, 00:15:15.417 "num_base_bdevs_discovered": 3, 00:15:15.417 "num_base_bdevs_operational": 4, 00:15:15.417 "base_bdevs_list": [ 00:15:15.417 { 00:15:15.417 "name": null, 00:15:15.417 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:15.417 "is_configured": false, 00:15:15.417 "data_offset": 0, 00:15:15.417 "data_size": 63488 00:15:15.417 }, 00:15:15.417 { 00:15:15.417 "name": "BaseBdev2", 00:15:15.417 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:15.417 "is_configured": true, 00:15:15.417 "data_offset": 2048, 00:15:15.417 "data_size": 63488 00:15:15.417 }, 00:15:15.417 { 00:15:15.417 "name": "BaseBdev3", 00:15:15.417 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:15.417 "is_configured": true, 00:15:15.417 "data_offset": 2048, 00:15:15.417 "data_size": 63488 00:15:15.417 }, 00:15:15.417 { 00:15:15.417 "name": "BaseBdev4", 00:15:15.417 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:15.417 "is_configured": true, 00:15:15.417 "data_offset": 2048, 00:15:15.417 "data_size": 63488 00:15:15.417 } 00:15:15.417 ] 00:15:15.417 }' 00:15:15.417 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.417 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.984 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.984 07:11:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.984 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.984 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2ffa2ff5-51f3-479b-a668-ab84bf029bba 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.984 [2024-11-20 07:11:58.124562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:15.984 [2024-11-20 07:11:58.124822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:15.984 [2024-11-20 07:11:58.124845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:15.984 NewBaseBdev 00:15:15.984 [2024-11-20 07:11:58.125169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:15:15.984 [2024-11-20 07:11:58.125375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:15.984 [2024-11-20 07:11:58.125389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:15.984 [2024-11-20 07:11:58.125559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.984 07:11:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.984 [ 00:15:15.984 { 00:15:15.984 "name": "NewBaseBdev", 00:15:15.984 "aliases": [ 00:15:15.984 "2ffa2ff5-51f3-479b-a668-ab84bf029bba" 00:15:15.984 ], 00:15:15.984 "product_name": "Malloc disk", 00:15:15.984 "block_size": 512, 00:15:15.984 "num_blocks": 65536, 00:15:15.984 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:15.984 "assigned_rate_limits": { 00:15:15.984 "rw_ios_per_sec": 0, 00:15:15.984 "rw_mbytes_per_sec": 0, 00:15:15.984 "r_mbytes_per_sec": 0, 00:15:15.984 "w_mbytes_per_sec": 0 00:15:15.984 }, 00:15:15.984 "claimed": true, 00:15:15.984 "claim_type": "exclusive_write", 00:15:15.984 "zoned": false, 00:15:15.984 "supported_io_types": { 00:15:15.984 "read": true, 00:15:15.984 "write": true, 00:15:15.984 "unmap": true, 00:15:15.984 "flush": true, 00:15:15.984 "reset": true, 00:15:15.984 "nvme_admin": false, 00:15:15.984 "nvme_io": false, 00:15:15.984 "nvme_io_md": false, 00:15:15.984 "write_zeroes": true, 00:15:15.984 "zcopy": true, 00:15:15.984 "get_zone_info": false, 00:15:15.984 "zone_management": false, 00:15:15.984 "zone_append": false, 00:15:15.984 "compare": false, 00:15:15.984 "compare_and_write": false, 00:15:15.984 "abort": true, 00:15:15.984 "seek_hole": false, 00:15:15.984 "seek_data": false, 00:15:15.984 "copy": true, 00:15:15.984 "nvme_iov_md": false 00:15:15.984 }, 00:15:15.984 "memory_domains": [ 00:15:15.984 { 00:15:15.984 "dma_device_id": "system", 00:15:15.984 "dma_device_type": 1 00:15:15.985 }, 00:15:15.985 { 00:15:15.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.985 "dma_device_type": 2 00:15:15.985 } 00:15:15.985 ], 00:15:15.985 "driver_specific": {} 00:15:15.985 } 00:15:15.985 ] 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.985 "name": "Existed_Raid", 00:15:15.985 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:15.985 "strip_size_kb": 0, 00:15:15.985 "state": "online", 00:15:15.985 "raid_level": 
"raid1", 00:15:15.985 "superblock": true, 00:15:15.985 "num_base_bdevs": 4, 00:15:15.985 "num_base_bdevs_discovered": 4, 00:15:15.985 "num_base_bdevs_operational": 4, 00:15:15.985 "base_bdevs_list": [ 00:15:15.985 { 00:15:15.985 "name": "NewBaseBdev", 00:15:15.985 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:15.985 "is_configured": true, 00:15:15.985 "data_offset": 2048, 00:15:15.985 "data_size": 63488 00:15:15.985 }, 00:15:15.985 { 00:15:15.985 "name": "BaseBdev2", 00:15:15.985 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:15.985 "is_configured": true, 00:15:15.985 "data_offset": 2048, 00:15:15.985 "data_size": 63488 00:15:15.985 }, 00:15:15.985 { 00:15:15.985 "name": "BaseBdev3", 00:15:15.985 "uuid": "7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:15.985 "is_configured": true, 00:15:15.985 "data_offset": 2048, 00:15:15.985 "data_size": 63488 00:15:15.985 }, 00:15:15.985 { 00:15:15.985 "name": "BaseBdev4", 00:15:15.985 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:15.985 "is_configured": true, 00:15:15.985 "data_offset": 2048, 00:15:15.985 "data_size": 63488 00:15:15.985 } 00:15:15.985 ] 00:15:15.985 }' 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.985 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.553 [2024-11-20 07:11:58.632138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.553 "name": "Existed_Raid", 00:15:16.553 "aliases": [ 00:15:16.553 "9b1b6ab9-4170-43de-8948-1cd60e920880" 00:15:16.553 ], 00:15:16.553 "product_name": "Raid Volume", 00:15:16.553 "block_size": 512, 00:15:16.553 "num_blocks": 63488, 00:15:16.553 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:16.553 "assigned_rate_limits": { 00:15:16.553 "rw_ios_per_sec": 0, 00:15:16.553 "rw_mbytes_per_sec": 0, 00:15:16.553 "r_mbytes_per_sec": 0, 00:15:16.553 "w_mbytes_per_sec": 0 00:15:16.553 }, 00:15:16.553 "claimed": false, 00:15:16.553 "zoned": false, 00:15:16.553 "supported_io_types": { 00:15:16.553 "read": true, 00:15:16.553 "write": true, 00:15:16.553 "unmap": false, 00:15:16.553 "flush": false, 00:15:16.553 "reset": true, 00:15:16.553 "nvme_admin": false, 00:15:16.553 "nvme_io": false, 00:15:16.553 "nvme_io_md": false, 00:15:16.553 "write_zeroes": true, 00:15:16.553 "zcopy": false, 00:15:16.553 "get_zone_info": false, 00:15:16.553 "zone_management": false, 00:15:16.553 "zone_append": false, 00:15:16.553 "compare": false, 00:15:16.553 "compare_and_write": false, 00:15:16.553 "abort": false, 00:15:16.553 "seek_hole": false, 
00:15:16.553 "seek_data": false, 00:15:16.553 "copy": false, 00:15:16.553 "nvme_iov_md": false 00:15:16.553 }, 00:15:16.553 "memory_domains": [ 00:15:16.553 { 00:15:16.553 "dma_device_id": "system", 00:15:16.553 "dma_device_type": 1 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.553 "dma_device_type": 2 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "dma_device_id": "system", 00:15:16.553 "dma_device_type": 1 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.553 "dma_device_type": 2 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "dma_device_id": "system", 00:15:16.553 "dma_device_type": 1 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.553 "dma_device_type": 2 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "dma_device_id": "system", 00:15:16.553 "dma_device_type": 1 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.553 "dma_device_type": 2 00:15:16.553 } 00:15:16.553 ], 00:15:16.553 "driver_specific": { 00:15:16.553 "raid": { 00:15:16.553 "uuid": "9b1b6ab9-4170-43de-8948-1cd60e920880", 00:15:16.553 "strip_size_kb": 0, 00:15:16.553 "state": "online", 00:15:16.553 "raid_level": "raid1", 00:15:16.553 "superblock": true, 00:15:16.553 "num_base_bdevs": 4, 00:15:16.553 "num_base_bdevs_discovered": 4, 00:15:16.553 "num_base_bdevs_operational": 4, 00:15:16.553 "base_bdevs_list": [ 00:15:16.553 { 00:15:16.553 "name": "NewBaseBdev", 00:15:16.553 "uuid": "2ffa2ff5-51f3-479b-a668-ab84bf029bba", 00:15:16.553 "is_configured": true, 00:15:16.553 "data_offset": 2048, 00:15:16.553 "data_size": 63488 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "name": "BaseBdev2", 00:15:16.553 "uuid": "b0318cfa-9cd4-496f-96e7-78c2cab5ad98", 00:15:16.553 "is_configured": true, 00:15:16.553 "data_offset": 2048, 00:15:16.553 "data_size": 63488 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "name": "BaseBdev3", 00:15:16.553 "uuid": 
"7a4d8ea7-ad75-4fbd-87ed-62b88fc1a8b2", 00:15:16.553 "is_configured": true, 00:15:16.553 "data_offset": 2048, 00:15:16.553 "data_size": 63488 00:15:16.553 }, 00:15:16.553 { 00:15:16.553 "name": "BaseBdev4", 00:15:16.553 "uuid": "57488d1a-f4c9-4776-a8f8-563c60264b0d", 00:15:16.553 "is_configured": true, 00:15:16.553 "data_offset": 2048, 00:15:16.553 "data_size": 63488 00:15:16.553 } 00:15:16.553 ] 00:15:16.553 } 00:15:16.553 } 00:15:16.553 }' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:16.553 BaseBdev2 00:15:16.553 BaseBdev3 00:15:16.553 BaseBdev4' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.553 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.812 
07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.812 [2024-11-20 07:11:58.927302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.812 [2024-11-20 07:11:58.927354] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.812 [2024-11-20 07:11:58.927444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.812 [2024-11-20 07:11:58.927771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.812 [2024-11-20 07:11:58.927795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:16.812 07:11:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74205 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74205 ']' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74205 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74205 00:15:16.812 killing process with pid 74205 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74205' 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74205 00:15:16.812 [2024-11-20 07:11:58.969756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.812 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74205 00:15:17.379 [2024-11-20 07:11:59.407490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.755 ************************************ 00:15:18.756 07:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:18.756 00:15:18.756 real 0m12.196s 00:15:18.756 user 0m19.097s 00:15:18.756 sys 0m2.368s 00:15:18.756 07:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.756 
07:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.756 END TEST raid_state_function_test_sb 00:15:18.756 ************************************ 00:15:18.756 07:12:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:18.756 07:12:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:18.756 07:12:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.756 07:12:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.756 ************************************ 00:15:18.756 START TEST raid_superblock_test 00:15:18.756 ************************************ 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
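The sizes reported throughout this run can be sanity-checked by hand: each base bdev is created with `bdev_malloc_create 32 512` (a 32 MiB malloc bdev with 512-byte blocks), and with the superblock enabled the RPC output reports `data_offset: 2048` and `data_size: 63488` per base bdev. A quick arithmetic sketch (illustrative only, not part of the test suite; the constants are taken from the log above):

```python
# Sanity-check of the block counts reported by bdev_get_bdevs in this run.
# Assumptions taken from the log: base bdevs come from
# `bdev_malloc_create 32 512` (32 MiB, 512-byte blocks), and the
# superblock region ends at block 2048 (the reported data_offset).
MALLOC_SIZE_MIB = 32
BLOCK_SIZE = 512
DATA_OFFSET_BLOCKS = 2048  # from the bdev_get_bdevs output in the log

total_blocks = MALLOC_SIZE_MIB * 1024 * 1024 // BLOCK_SIZE
data_blocks = total_blocks - DATA_OFFSET_BLOCKS

print(total_blocks)  # 65536 blocks in each 32 MiB malloc bdev
print(data_blocks)   # 63488, matching the reported data_size
```

For raid1 the volume's usable size equals a single base bdev's data region, which is consistent with the raid volume itself reporting `num_blocks: 63488`.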
00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74884 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74884 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74884 ']' 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.756 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.756 [2024-11-20 07:12:00.812050] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:15:18.756 [2024-11-20 07:12:00.812202] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74884 ] 00:15:18.756 [2024-11-20 07:12:00.998085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.014 [2024-11-20 07:12:01.143166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.273 [2024-11-20 07:12:01.404912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.273 [2024-11-20 07:12:01.404965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:19.532 
07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.532 malloc1 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.532 [2024-11-20 07:12:01.723433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:19.532 [2024-11-20 07:12:01.723515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.532 [2024-11-20 07:12:01.723546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:19.532 [2024-11-20 07:12:01.723558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.532 [2024-11-20 07:12:01.726380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.532 [2024-11-20 07:12:01.726421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:19.532 pt1 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.532 malloc2 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.532 [2024-11-20 07:12:01.786928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.532 [2024-11-20 07:12:01.787004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.532 [2024-11-20 07:12:01.787027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:19.532 [2024-11-20 07:12:01.787036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.532 [2024-11-20 07:12:01.789464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.532 [2024-11-20 07:12:01.789502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.532 
pt2 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.532 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.791 malloc3 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.791 [2024-11-20 07:12:01.869186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:19.791 [2024-11-20 07:12:01.869265] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.791 [2024-11-20 07:12:01.869292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:19.791 [2024-11-20 07:12:01.869322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.791 [2024-11-20 07:12:01.872146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.791 [2024-11-20 07:12:01.872193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:19.791 pt3 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.791 malloc4 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.791 [2024-11-20 07:12:01.938328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:19.791 [2024-11-20 07:12:01.938437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.791 [2024-11-20 07:12:01.938466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:19.791 [2024-11-20 07:12:01.938478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.791 [2024-11-20 07:12:01.941379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.791 [2024-11-20 07:12:01.941430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:19.791 pt4 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.791 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.791 [2024-11-20 07:12:01.950364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:19.791 [2024-11-20 07:12:01.952856] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.791 [2024-11-20 07:12:01.952941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:19.792 [2024-11-20 07:12:01.953008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:19.792 [2024-11-20 07:12:01.953258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:19.792 [2024-11-20 07:12:01.953288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:19.792 [2024-11-20 07:12:01.953709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:19.792 [2024-11-20 07:12:01.953980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:19.792 [2024-11-20 07:12:01.954007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:19.792 [2024-11-20 07:12:01.954306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.792 
07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.792 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.792 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.792 "name": "raid_bdev1", 00:15:19.792 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:19.792 "strip_size_kb": 0, 00:15:19.792 "state": "online", 00:15:19.792 "raid_level": "raid1", 00:15:19.792 "superblock": true, 00:15:19.792 "num_base_bdevs": 4, 00:15:19.792 "num_base_bdevs_discovered": 4, 00:15:19.792 "num_base_bdevs_operational": 4, 00:15:19.792 "base_bdevs_list": [ 00:15:19.792 { 00:15:19.792 "name": "pt1", 00:15:19.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.792 "is_configured": true, 00:15:19.792 "data_offset": 2048, 00:15:19.792 "data_size": 63488 00:15:19.792 }, 00:15:19.792 { 00:15:19.792 "name": "pt2", 00:15:19.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.792 "is_configured": true, 00:15:19.792 "data_offset": 2048, 00:15:19.792 "data_size": 63488 00:15:19.792 }, 00:15:19.792 { 00:15:19.792 "name": "pt3", 00:15:19.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.792 "is_configured": true, 00:15:19.792 "data_offset": 2048, 00:15:19.792 "data_size": 63488 
00:15:19.792 }, 00:15:19.792 { 00:15:19.792 "name": "pt4", 00:15:19.792 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.792 "is_configured": true, 00:15:19.792 "data_offset": 2048, 00:15:19.792 "data_size": 63488 00:15:19.792 } 00:15:19.792 ] 00:15:19.792 }' 00:15:19.792 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.792 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:20.398 [2024-11-20 07:12:02.426046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:20.398 "name": "raid_bdev1", 00:15:20.398 "aliases": [ 00:15:20.398 "6b961311-c464-44ac-a322-2028b7e8a926" 00:15:20.398 ], 
00:15:20.398 "product_name": "Raid Volume", 00:15:20.398 "block_size": 512, 00:15:20.398 "num_blocks": 63488, 00:15:20.398 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:20.398 "assigned_rate_limits": { 00:15:20.398 "rw_ios_per_sec": 0, 00:15:20.398 "rw_mbytes_per_sec": 0, 00:15:20.398 "r_mbytes_per_sec": 0, 00:15:20.398 "w_mbytes_per_sec": 0 00:15:20.398 }, 00:15:20.398 "claimed": false, 00:15:20.398 "zoned": false, 00:15:20.398 "supported_io_types": { 00:15:20.398 "read": true, 00:15:20.398 "write": true, 00:15:20.398 "unmap": false, 00:15:20.398 "flush": false, 00:15:20.398 "reset": true, 00:15:20.398 "nvme_admin": false, 00:15:20.398 "nvme_io": false, 00:15:20.398 "nvme_io_md": false, 00:15:20.398 "write_zeroes": true, 00:15:20.398 "zcopy": false, 00:15:20.398 "get_zone_info": false, 00:15:20.398 "zone_management": false, 00:15:20.398 "zone_append": false, 00:15:20.398 "compare": false, 00:15:20.398 "compare_and_write": false, 00:15:20.398 "abort": false, 00:15:20.398 "seek_hole": false, 00:15:20.398 "seek_data": false, 00:15:20.398 "copy": false, 00:15:20.398 "nvme_iov_md": false 00:15:20.398 }, 00:15:20.398 "memory_domains": [ 00:15:20.398 { 00:15:20.398 "dma_device_id": "system", 00:15:20.398 "dma_device_type": 1 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.398 "dma_device_type": 2 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "dma_device_id": "system", 00:15:20.398 "dma_device_type": 1 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.398 "dma_device_type": 2 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "dma_device_id": "system", 00:15:20.398 "dma_device_type": 1 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.398 "dma_device_type": 2 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "dma_device_id": "system", 00:15:20.398 "dma_device_type": 1 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:20.398 "dma_device_type": 2 00:15:20.398 } 00:15:20.398 ], 00:15:20.398 "driver_specific": { 00:15:20.398 "raid": { 00:15:20.398 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:20.398 "strip_size_kb": 0, 00:15:20.398 "state": "online", 00:15:20.398 "raid_level": "raid1", 00:15:20.398 "superblock": true, 00:15:20.398 "num_base_bdevs": 4, 00:15:20.398 "num_base_bdevs_discovered": 4, 00:15:20.398 "num_base_bdevs_operational": 4, 00:15:20.398 "base_bdevs_list": [ 00:15:20.398 { 00:15:20.398 "name": "pt1", 00:15:20.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.398 "is_configured": true, 00:15:20.398 "data_offset": 2048, 00:15:20.398 "data_size": 63488 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "name": "pt2", 00:15:20.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.398 "is_configured": true, 00:15:20.398 "data_offset": 2048, 00:15:20.398 "data_size": 63488 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "name": "pt3", 00:15:20.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.398 "is_configured": true, 00:15:20.398 "data_offset": 2048, 00:15:20.398 "data_size": 63488 00:15:20.398 }, 00:15:20.398 { 00:15:20.398 "name": "pt4", 00:15:20.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.398 "is_configured": true, 00:15:20.398 "data_offset": 2048, 00:15:20.398 "data_size": 63488 00:15:20.398 } 00:15:20.398 ] 00:15:20.398 } 00:15:20.398 } 00:15:20.398 }' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:20.398 pt2 00:15:20.398 pt3 00:15:20.398 pt4' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.398 07:12:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.398 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.658 [2024-11-20 07:12:02.729316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6b961311-c464-44ac-a322-2028b7e8a926 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6b961311-c464-44ac-a322-2028b7e8a926 ']' 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.658 [2024-11-20 07:12:02.768957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.658 [2024-11-20 07:12:02.768994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.658 [2024-11-20 07:12:02.769089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.658 [2024-11-20 07:12:02.769198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.658 [2024-11-20 07:12:02.769219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.658 07:12:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.659 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.918 07:12:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.918 [2024-11-20 07:12:02.936735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:20.918 [2024-11-20 07:12:02.939202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:20.918 [2024-11-20 07:12:02.939275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:20.918 [2024-11-20 07:12:02.939311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:20.918 [2024-11-20 07:12:02.939397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:20.918 [2024-11-20 07:12:02.939471] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:20.918 [2024-11-20 07:12:02.939493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:20.918 [2024-11-20 07:12:02.939513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:20.918 [2024-11-20 07:12:02.939527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.918 [2024-11-20 07:12:02.939539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:15:20.918 request: 00:15:20.918 { 00:15:20.918 "name": "raid_bdev1", 00:15:20.918 "raid_level": "raid1", 00:15:20.918 "base_bdevs": [ 00:15:20.918 "malloc1", 00:15:20.918 "malloc2", 00:15:20.918 "malloc3", 00:15:20.918 "malloc4" 00:15:20.918 ], 00:15:20.918 "superblock": false, 00:15:20.918 "method": "bdev_raid_create", 00:15:20.918 "req_id": 1 00:15:20.918 } 00:15:20.918 Got JSON-RPC error response 00:15:20.918 response: 00:15:20.918 { 00:15:20.918 "code": -17, 00:15:20.918 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:20.918 } 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.918 
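The `NOT rpc_cmd bdev_raid_create ...` lines above exercise a negative test: the duplicate create is *expected* to fail with `-17` "File exists", and the surrounding `es=1` / `(( !es == 0 ))` bookkeeping turns that expected failure into a pass. A minimal sketch of that invert-the-exit-status pattern (the real `NOT` helper in `autotest_common.sh` does more bookkeeping; `always_fails` is a hypothetical stand-in for the duplicate `bdev_raid_create` call):

```shell
# NOT inverts the exit status of the wrapped command, so a command that is
# expected to fail makes the test pass, and an unexpected success fails it.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded
  else
    return 0    # command failed, as required by the negative test
  fi
}

# Hypothetical stand-in for the duplicate bdev_raid_create RPC:
always_fails() {
  echo 'Failed to create RAID bdev raid_bdev1: File exists' >&2
  return 1
}

result=fail
NOT always_fails 2>/dev/null && result=pass
```

The same wrapper also guards against the opposite bug: if a regression made the duplicate create succeed, `NOT` would return non-zero and the test run would abort.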
07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.918 [2024-11-20 07:12:03.004586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.918 [2024-11-20 07:12:03.004689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.918 [2024-11-20 07:12:03.004711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:20.918 [2024-11-20 07:12:03.004724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.918 [2024-11-20 07:12:03.007313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.918 [2024-11-20 07:12:03.007368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.918 [2024-11-20 07:12:03.007494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:20.918 [2024-11-20 07:12:03.007568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.918 pt1 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.918 07:12:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.918 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.918 "name": "raid_bdev1", 00:15:20.918 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:20.918 "strip_size_kb": 0, 00:15:20.918 "state": "configuring", 00:15:20.918 "raid_level": "raid1", 00:15:20.918 "superblock": true, 00:15:20.918 "num_base_bdevs": 4, 00:15:20.918 "num_base_bdevs_discovered": 1, 00:15:20.918 "num_base_bdevs_operational": 4, 00:15:20.918 "base_bdevs_list": [ 00:15:20.918 { 00:15:20.918 "name": "pt1", 00:15:20.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.918 "is_configured": true, 00:15:20.918 "data_offset": 2048, 00:15:20.918 "data_size": 63488 00:15:20.918 }, 00:15:20.918 { 00:15:20.918 "name": null, 00:15:20.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.918 "is_configured": false, 00:15:20.918 "data_offset": 2048, 00:15:20.918 "data_size": 63488 00:15:20.918 }, 00:15:20.918 { 00:15:20.918 "name": null, 00:15:20.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.918 
"is_configured": false, 00:15:20.919 "data_offset": 2048, 00:15:20.919 "data_size": 63488 00:15:20.919 }, 00:15:20.919 { 00:15:20.919 "name": null, 00:15:20.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.919 "is_configured": false, 00:15:20.919 "data_offset": 2048, 00:15:20.919 "data_size": 63488 00:15:20.919 } 00:15:20.919 ] 00:15:20.919 }' 00:15:20.919 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.919 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.178 [2024-11-20 07:12:03.427885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.178 [2024-11-20 07:12:03.427976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.178 [2024-11-20 07:12:03.428001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:21.178 [2024-11-20 07:12:03.428014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.178 [2024-11-20 07:12:03.428578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.178 [2024-11-20 07:12:03.428608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.178 [2024-11-20 07:12:03.428730] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:21.178 [2024-11-20 07:12:03.428790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
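The `cmp_raid_bdev='512   '` and `[[ 512 == \5\1\2\ \ \ ]]` checks earlier in the log come from `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`: `join(" ")` renders the three null metadata fields as empty strings, so the comparison strings carry trailing spaces, which xtrace escapes as `\ \ \ `. A standalone sketch of that trailing-space equality (hypothetical values mirroring a 512-byte-block bdev with no metadata/DIF):

```shell
# join(" ") over [512, null, null, null] yields "512   " (three trailing
# spaces); both sides of the [[ ]] test carry them, so the match succeeds.
cmp_raid_bdev='512   '
cmp_base_bdev='512   '
ok=0
[[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] && ok=1
```

Quoting the right-hand side makes `[[ == ]]` a literal string comparison; left unquoted (as in the script) it is a glob pattern, which is why xtrace shows each character backslash-escaped.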
00:15:21.178 pt2 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.178 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.178 [2024-11-20 07:12:03.439834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.437 07:12:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.437 "name": "raid_bdev1", 00:15:21.437 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:21.437 "strip_size_kb": 0, 00:15:21.437 "state": "configuring", 00:15:21.437 "raid_level": "raid1", 00:15:21.437 "superblock": true, 00:15:21.437 "num_base_bdevs": 4, 00:15:21.437 "num_base_bdevs_discovered": 1, 00:15:21.437 "num_base_bdevs_operational": 4, 00:15:21.437 "base_bdevs_list": [ 00:15:21.437 { 00:15:21.437 "name": "pt1", 00:15:21.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.437 "is_configured": true, 00:15:21.437 "data_offset": 2048, 00:15:21.437 "data_size": 63488 00:15:21.437 }, 00:15:21.437 { 00:15:21.437 "name": null, 00:15:21.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.437 "is_configured": false, 00:15:21.437 "data_offset": 0, 00:15:21.437 "data_size": 63488 00:15:21.437 }, 00:15:21.437 { 00:15:21.437 "name": null, 00:15:21.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.437 "is_configured": false, 00:15:21.437 "data_offset": 2048, 00:15:21.437 "data_size": 63488 00:15:21.437 }, 00:15:21.437 { 00:15:21.437 "name": null, 00:15:21.437 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:21.437 "is_configured": false, 00:15:21.437 "data_offset": 2048, 00:15:21.437 "data_size": 63488 00:15:21.437 } 00:15:21.437 ] 00:15:21.437 }' 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.437 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.698 [2024-11-20 07:12:03.903136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.698 [2024-11-20 07:12:03.903234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.698 [2024-11-20 07:12:03.903266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:21.698 [2024-11-20 07:12:03.903278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.698 [2024-11-20 07:12:03.903876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.698 [2024-11-20 07:12:03.903903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.698 [2024-11-20 07:12:03.904018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:21.698 [2024-11-20 07:12:03.904057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.698 pt2 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:21.698 07:12:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.698 [2024-11-20 07:12:03.915114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:21.698 [2024-11-20 07:12:03.915208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.698 [2024-11-20 07:12:03.915243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:21.698 [2024-11-20 07:12:03.915257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.698 [2024-11-20 07:12:03.915917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.698 [2024-11-20 07:12:03.915970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:21.698 [2024-11-20 07:12:03.916099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:21.698 [2024-11-20 07:12:03.916145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:21.698 pt3 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.698 [2024-11-20 07:12:03.927000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:21.698 [2024-11-20 
07:12:03.927055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.698 [2024-11-20 07:12:03.927092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:21.698 [2024-11-20 07:12:03.927102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.698 [2024-11-20 07:12:03.927602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.698 [2024-11-20 07:12:03.927637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:21.698 [2024-11-20 07:12:03.927720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:21.698 [2024-11-20 07:12:03.927748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:21.698 [2024-11-20 07:12:03.927917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:21.698 [2024-11-20 07:12:03.927935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:21.698 [2024-11-20 07:12:03.928231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:21.698 [2024-11-20 07:12:03.928434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:21.698 [2024-11-20 07:12:03.928458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:21.698 [2024-11-20 07:12:03.928644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.698 pt4 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.698 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.957 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.957 "name": "raid_bdev1", 00:15:21.957 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:21.957 "strip_size_kb": 0, 00:15:21.957 "state": "online", 00:15:21.957 "raid_level": "raid1", 00:15:21.957 "superblock": true, 00:15:21.957 "num_base_bdevs": 4, 00:15:21.957 
"num_base_bdevs_discovered": 4, 00:15:21.957 "num_base_bdevs_operational": 4, 00:15:21.957 "base_bdevs_list": [ 00:15:21.957 { 00:15:21.957 "name": "pt1", 00:15:21.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.957 "is_configured": true, 00:15:21.957 "data_offset": 2048, 00:15:21.957 "data_size": 63488 00:15:21.958 }, 00:15:21.958 { 00:15:21.958 "name": "pt2", 00:15:21.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.958 "is_configured": true, 00:15:21.958 "data_offset": 2048, 00:15:21.958 "data_size": 63488 00:15:21.958 }, 00:15:21.958 { 00:15:21.958 "name": "pt3", 00:15:21.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.958 "is_configured": true, 00:15:21.958 "data_offset": 2048, 00:15:21.958 "data_size": 63488 00:15:21.958 }, 00:15:21.958 { 00:15:21.958 "name": "pt4", 00:15:21.958 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:21.958 "is_configured": true, 00:15:21.958 "data_offset": 2048, 00:15:21.958 "data_size": 63488 00:15:21.958 } 00:15:21.958 ] 00:15:21.958 }' 00:15:21.958 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.958 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.217 [2024-11-20 07:12:04.434667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.217 "name": "raid_bdev1", 00:15:22.217 "aliases": [ 00:15:22.217 "6b961311-c464-44ac-a322-2028b7e8a926" 00:15:22.217 ], 00:15:22.217 "product_name": "Raid Volume", 00:15:22.217 "block_size": 512, 00:15:22.217 "num_blocks": 63488, 00:15:22.217 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:22.217 "assigned_rate_limits": { 00:15:22.217 "rw_ios_per_sec": 0, 00:15:22.217 "rw_mbytes_per_sec": 0, 00:15:22.217 "r_mbytes_per_sec": 0, 00:15:22.217 "w_mbytes_per_sec": 0 00:15:22.217 }, 00:15:22.217 "claimed": false, 00:15:22.217 "zoned": false, 00:15:22.217 "supported_io_types": { 00:15:22.217 "read": true, 00:15:22.217 "write": true, 00:15:22.217 "unmap": false, 00:15:22.217 "flush": false, 00:15:22.217 "reset": true, 00:15:22.217 "nvme_admin": false, 00:15:22.217 "nvme_io": false, 00:15:22.217 "nvme_io_md": false, 00:15:22.217 "write_zeroes": true, 00:15:22.217 "zcopy": false, 00:15:22.217 "get_zone_info": false, 00:15:22.217 "zone_management": false, 00:15:22.217 "zone_append": false, 00:15:22.217 "compare": false, 00:15:22.217 "compare_and_write": false, 00:15:22.217 "abort": false, 00:15:22.217 "seek_hole": false, 00:15:22.217 "seek_data": false, 00:15:22.217 "copy": false, 00:15:22.217 "nvme_iov_md": false 00:15:22.217 }, 00:15:22.217 "memory_domains": [ 00:15:22.217 { 00:15:22.217 "dma_device_id": "system", 00:15:22.217 
"dma_device_type": 1 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.217 "dma_device_type": 2 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "dma_device_id": "system", 00:15:22.217 "dma_device_type": 1 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.217 "dma_device_type": 2 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "dma_device_id": "system", 00:15:22.217 "dma_device_type": 1 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.217 "dma_device_type": 2 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "dma_device_id": "system", 00:15:22.217 "dma_device_type": 1 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.217 "dma_device_type": 2 00:15:22.217 } 00:15:22.217 ], 00:15:22.217 "driver_specific": { 00:15:22.217 "raid": { 00:15:22.217 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:22.217 "strip_size_kb": 0, 00:15:22.217 "state": "online", 00:15:22.217 "raid_level": "raid1", 00:15:22.217 "superblock": true, 00:15:22.217 "num_base_bdevs": 4, 00:15:22.217 "num_base_bdevs_discovered": 4, 00:15:22.217 "num_base_bdevs_operational": 4, 00:15:22.217 "base_bdevs_list": [ 00:15:22.217 { 00:15:22.217 "name": "pt1", 00:15:22.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:22.217 "is_configured": true, 00:15:22.217 "data_offset": 2048, 00:15:22.217 "data_size": 63488 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "name": "pt2", 00:15:22.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.217 "is_configured": true, 00:15:22.217 "data_offset": 2048, 00:15:22.217 "data_size": 63488 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "name": "pt3", 00:15:22.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:22.217 "is_configured": true, 00:15:22.217 "data_offset": 2048, 00:15:22.217 "data_size": 63488 00:15:22.217 }, 00:15:22.217 { 00:15:22.217 "name": "pt4", 00:15:22.217 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:22.217 "is_configured": true, 00:15:22.217 "data_offset": 2048, 00:15:22.217 "data_size": 63488 00:15:22.217 } 00:15:22.217 ] 00:15:22.217 } 00:15:22.217 } 00:15:22.217 }' 00:15:22.217 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:22.478 pt2 00:15:22.478 pt3 00:15:22.478 pt4' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:22.478 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.739 [2024-11-20 07:12:04.782086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6b961311-c464-44ac-a322-2028b7e8a926 '!=' 6b961311-c464-44ac-a322-2028b7e8a926 ']' 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.739 [2024-11-20 07:12:04.829692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:22.739 07:12:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.739 "name": "raid_bdev1", 00:15:22.739 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:22.739 "strip_size_kb": 0, 00:15:22.739 "state": "online", 
00:15:22.739 "raid_level": "raid1", 00:15:22.739 "superblock": true, 00:15:22.739 "num_base_bdevs": 4, 00:15:22.739 "num_base_bdevs_discovered": 3, 00:15:22.739 "num_base_bdevs_operational": 3, 00:15:22.739 "base_bdevs_list": [ 00:15:22.739 { 00:15:22.739 "name": null, 00:15:22.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.739 "is_configured": false, 00:15:22.739 "data_offset": 0, 00:15:22.739 "data_size": 63488 00:15:22.739 }, 00:15:22.739 { 00:15:22.739 "name": "pt2", 00:15:22.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.739 "is_configured": true, 00:15:22.739 "data_offset": 2048, 00:15:22.739 "data_size": 63488 00:15:22.739 }, 00:15:22.739 { 00:15:22.739 "name": "pt3", 00:15:22.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:22.739 "is_configured": true, 00:15:22.739 "data_offset": 2048, 00:15:22.739 "data_size": 63488 00:15:22.739 }, 00:15:22.739 { 00:15:22.739 "name": "pt4", 00:15:22.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:22.739 "is_configured": true, 00:15:22.739 "data_offset": 2048, 00:15:22.739 "data_size": 63488 00:15:22.739 } 00:15:22.739 ] 00:15:22.739 }' 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.739 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.000 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:23.000 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.000 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.000 [2024-11-20 07:12:05.261192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.000 [2024-11-20 07:12:05.261329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.000 [2024-11-20 07:12:05.261476] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:23.000 [2024-11-20 07:12:05.261608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.000 [2024-11-20 07:12:05.261677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:23.260 
07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.260 [2024-11-20 07:12:05.345150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:23.260 [2024-11-20 07:12:05.345220] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.260 [2024-11-20 07:12:05.345243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:23.260 [2024-11-20 07:12:05.345255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.260 [2024-11-20 07:12:05.347807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.260 [2024-11-20 07:12:05.347852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:23.260 [2024-11-20 07:12:05.347941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:23.260 [2024-11-20 07:12:05.347987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:23.260 pt2 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.260 "name": "raid_bdev1", 00:15:23.260 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:23.260 "strip_size_kb": 0, 00:15:23.260 "state": "configuring", 00:15:23.260 "raid_level": "raid1", 00:15:23.260 "superblock": true, 00:15:23.260 "num_base_bdevs": 4, 00:15:23.260 "num_base_bdevs_discovered": 1, 00:15:23.260 "num_base_bdevs_operational": 3, 00:15:23.260 "base_bdevs_list": [ 00:15:23.260 { 00:15:23.260 "name": null, 00:15:23.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.260 "is_configured": false, 00:15:23.260 "data_offset": 2048, 00:15:23.260 "data_size": 63488 00:15:23.260 }, 00:15:23.260 { 00:15:23.260 "name": "pt2", 00:15:23.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.260 "is_configured": true, 00:15:23.260 "data_offset": 2048, 00:15:23.260 "data_size": 63488 00:15:23.260 }, 00:15:23.260 { 00:15:23.260 "name": null, 00:15:23.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:23.260 "is_configured": false, 00:15:23.260 "data_offset": 2048, 00:15:23.260 "data_size": 63488 00:15:23.260 }, 00:15:23.260 { 00:15:23.260 "name": null, 00:15:23.260 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:23.260 "is_configured": false, 00:15:23.260 "data_offset": 2048, 00:15:23.260 "data_size": 63488 00:15:23.260 } 00:15:23.260 ] 00:15:23.260 }' 
00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.260 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.832 [2024-11-20 07:12:05.812851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:23.832 [2024-11-20 07:12:05.813054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.832 [2024-11-20 07:12:05.813116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:23.832 [2024-11-20 07:12:05.813156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.832 [2024-11-20 07:12:05.813752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.832 [2024-11-20 07:12:05.813827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:23.832 [2024-11-20 07:12:05.813969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:23.832 [2024-11-20 07:12:05.814030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:23.832 pt3 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.832 "name": "raid_bdev1", 00:15:23.832 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:23.832 "strip_size_kb": 0, 00:15:23.832 "state": "configuring", 00:15:23.832 "raid_level": "raid1", 00:15:23.832 "superblock": true, 00:15:23.832 "num_base_bdevs": 4, 00:15:23.832 "num_base_bdevs_discovered": 2, 00:15:23.832 "num_base_bdevs_operational": 3, 00:15:23.832 
"base_bdevs_list": [ 00:15:23.832 { 00:15:23.832 "name": null, 00:15:23.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.832 "is_configured": false, 00:15:23.832 "data_offset": 2048, 00:15:23.832 "data_size": 63488 00:15:23.832 }, 00:15:23.832 { 00:15:23.832 "name": "pt2", 00:15:23.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.832 "is_configured": true, 00:15:23.832 "data_offset": 2048, 00:15:23.832 "data_size": 63488 00:15:23.832 }, 00:15:23.832 { 00:15:23.832 "name": "pt3", 00:15:23.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:23.832 "is_configured": true, 00:15:23.832 "data_offset": 2048, 00:15:23.832 "data_size": 63488 00:15:23.832 }, 00:15:23.832 { 00:15:23.832 "name": null, 00:15:23.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:23.832 "is_configured": false, 00:15:23.832 "data_offset": 2048, 00:15:23.832 "data_size": 63488 00:15:23.832 } 00:15:23.832 ] 00:15:23.832 }' 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.832 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.093 [2024-11-20 07:12:06.308104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:24.093 [2024-11-20 07:12:06.308187] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.093 [2024-11-20 07:12:06.308213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:24.093 [2024-11-20 07:12:06.308224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.093 [2024-11-20 07:12:06.308755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.093 [2024-11-20 07:12:06.308776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:24.093 [2024-11-20 07:12:06.308870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:24.093 [2024-11-20 07:12:06.308904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:24.093 [2024-11-20 07:12:06.309085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:24.093 [2024-11-20 07:12:06.309101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:24.093 [2024-11-20 07:12:06.309405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:24.093 [2024-11-20 07:12:06.309589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:24.093 [2024-11-20 07:12:06.309603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:24.093 [2024-11-20 07:12:06.309763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.093 pt4 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.093 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.353 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.353 "name": "raid_bdev1", 00:15:24.353 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:24.353 "strip_size_kb": 0, 00:15:24.353 "state": "online", 00:15:24.353 "raid_level": "raid1", 00:15:24.353 "superblock": true, 00:15:24.353 "num_base_bdevs": 4, 00:15:24.353 "num_base_bdevs_discovered": 3, 00:15:24.353 "num_base_bdevs_operational": 3, 00:15:24.353 "base_bdevs_list": [ 00:15:24.353 { 00:15:24.353 "name": null, 00:15:24.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.353 "is_configured": false, 00:15:24.353 
"data_offset": 2048, 00:15:24.353 "data_size": 63488 00:15:24.353 }, 00:15:24.353 { 00:15:24.353 "name": "pt2", 00:15:24.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.353 "is_configured": true, 00:15:24.353 "data_offset": 2048, 00:15:24.353 "data_size": 63488 00:15:24.353 }, 00:15:24.353 { 00:15:24.353 "name": "pt3", 00:15:24.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:24.353 "is_configured": true, 00:15:24.353 "data_offset": 2048, 00:15:24.353 "data_size": 63488 00:15:24.353 }, 00:15:24.353 { 00:15:24.353 "name": "pt4", 00:15:24.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:24.353 "is_configured": true, 00:15:24.353 "data_offset": 2048, 00:15:24.353 "data_size": 63488 00:15:24.353 } 00:15:24.353 ] 00:15:24.353 }' 00:15:24.353 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.353 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.612 [2024-11-20 07:12:06.823220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.612 [2024-11-20 07:12:06.823323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.612 [2024-11-20 07:12:06.823456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.612 [2024-11-20 07:12:06.823568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.612 [2024-11-20 07:12:06.823628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:24.612 07:12:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.612 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.955 [2024-11-20 07:12:06.903103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:24.955 [2024-11-20 07:12:06.903269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:24.955 [2024-11-20 07:12:06.903314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:24.955 [2024-11-20 07:12:06.903375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.955 [2024-11-20 07:12:06.905967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.955 [2024-11-20 07:12:06.906026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:24.955 [2024-11-20 07:12:06.906133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:24.955 [2024-11-20 07:12:06.906196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:24.955 [2024-11-20 07:12:06.906366] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:24.955 [2024-11-20 07:12:06.906382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.955 [2024-11-20 07:12:06.906399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:24.955 [2024-11-20 07:12:06.906484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.955 [2024-11-20 07:12:06.906617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:24.955 pt1 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.955 "name": "raid_bdev1", 00:15:24.955 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:24.955 "strip_size_kb": 0, 00:15:24.955 "state": "configuring", 00:15:24.955 "raid_level": "raid1", 00:15:24.955 "superblock": true, 00:15:24.955 "num_base_bdevs": 4, 00:15:24.955 "num_base_bdevs_discovered": 2, 00:15:24.955 "num_base_bdevs_operational": 3, 00:15:24.955 "base_bdevs_list": [ 00:15:24.955 { 00:15:24.955 "name": null, 00:15:24.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.955 "is_configured": false, 00:15:24.955 "data_offset": 2048, 00:15:24.955 
"data_size": 63488 00:15:24.955 }, 00:15:24.955 { 00:15:24.955 "name": "pt2", 00:15:24.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.955 "is_configured": true, 00:15:24.955 "data_offset": 2048, 00:15:24.955 "data_size": 63488 00:15:24.955 }, 00:15:24.955 { 00:15:24.955 "name": "pt3", 00:15:24.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:24.955 "is_configured": true, 00:15:24.955 "data_offset": 2048, 00:15:24.955 "data_size": 63488 00:15:24.955 }, 00:15:24.955 { 00:15:24.955 "name": null, 00:15:24.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:24.955 "is_configured": false, 00:15:24.955 "data_offset": 2048, 00:15:24.955 "data_size": 63488 00:15:24.955 } 00:15:24.955 ] 00:15:24.955 }' 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.955 07:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.228 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:25.228 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:25.228 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.229 [2024-11-20 
07:12:07.474141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:25.229 [2024-11-20 07:12:07.474235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.229 [2024-11-20 07:12:07.474260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:25.229 [2024-11-20 07:12:07.474270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.229 [2024-11-20 07:12:07.474809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.229 [2024-11-20 07:12:07.474838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:25.229 [2024-11-20 07:12:07.474936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:25.229 [2024-11-20 07:12:07.474972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:25.229 [2024-11-20 07:12:07.475139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:25.229 [2024-11-20 07:12:07.475150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:25.229 [2024-11-20 07:12:07.475457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:25.229 [2024-11-20 07:12:07.475630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:25.229 [2024-11-20 07:12:07.475711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:25.229 [2024-11-20 07:12:07.475907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.229 pt4 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:25.229 07:12:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.229 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.489 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.489 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.489 "name": "raid_bdev1", 00:15:25.489 "uuid": "6b961311-c464-44ac-a322-2028b7e8a926", 00:15:25.489 "strip_size_kb": 0, 00:15:25.489 "state": "online", 00:15:25.489 "raid_level": "raid1", 00:15:25.489 "superblock": true, 00:15:25.489 "num_base_bdevs": 4, 00:15:25.489 "num_base_bdevs_discovered": 3, 00:15:25.489 "num_base_bdevs_operational": 3, 00:15:25.489 "base_bdevs_list": [ 00:15:25.489 { 
00:15:25.489 "name": null, 00:15:25.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.489 "is_configured": false, 00:15:25.489 "data_offset": 2048, 00:15:25.489 "data_size": 63488 00:15:25.489 }, 00:15:25.489 { 00:15:25.489 "name": "pt2", 00:15:25.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.489 "is_configured": true, 00:15:25.489 "data_offset": 2048, 00:15:25.489 "data_size": 63488 00:15:25.489 }, 00:15:25.489 { 00:15:25.489 "name": "pt3", 00:15:25.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:25.489 "is_configured": true, 00:15:25.489 "data_offset": 2048, 00:15:25.489 "data_size": 63488 00:15:25.489 }, 00:15:25.489 { 00:15:25.489 "name": "pt4", 00:15:25.489 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:25.489 "is_configured": true, 00:15:25.489 "data_offset": 2048, 00:15:25.489 "data_size": 63488 00:15:25.489 } 00:15:25.489 ] 00:15:25.489 }' 00:15:25.489 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.489 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.748 
07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:25.748 [2024-11-20 07:12:07.977703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.748 07:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6b961311-c464-44ac-a322-2028b7e8a926 '!=' 6b961311-c464-44ac-a322-2028b7e8a926 ']' 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74884 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74884 ']' 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74884 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74884 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74884' 00:15:26.008 killing process with pid 74884 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74884 00:15:26.008 [2024-11-20 07:12:08.067449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.008 07:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74884 00:15:26.008 [2024-11-20 07:12:08.067653] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.008 [2024-11-20 07:12:08.067744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.008 [2024-11-20 07:12:08.067757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:26.575 [2024-11-20 07:12:08.540864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.952 07:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:27.952 00:15:27.952 real 0m9.141s 00:15:27.952 user 0m14.054s 00:15:27.952 sys 0m1.850s 00:15:27.952 07:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.952 07:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.952 ************************************ 00:15:27.952 END TEST raid_superblock_test 00:15:27.952 ************************************ 00:15:27.952 07:12:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:27.952 07:12:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:27.952 07:12:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.952 07:12:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.952 ************************************ 00:15:27.952 START TEST raid_read_error_test 00:15:27.952 ************************************ 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:27.952 07:12:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:27.952 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Hof1wGY5Io 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75374 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75374 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75374 ']' 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.953 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.953 [2024-11-20 07:12:10.038484] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:15:27.953 [2024-11-20 07:12:10.038721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75374 ] 00:15:28.211 [2024-11-20 07:12:10.220225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.211 [2024-11-20 07:12:10.359503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.468 [2024-11-20 07:12:10.601124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.468 [2024-11-20 07:12:10.601202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.727 BaseBdev1_malloc 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.727 true 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.727 [2024-11-20 07:12:10.969794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:28.727 [2024-11-20 07:12:10.969993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.727 [2024-11-20 07:12:10.970056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:28.727 [2024-11-20 07:12:10.970116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.727 [2024-11-20 07:12:10.973247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.727 [2024-11-20 07:12:10.973307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.727 BaseBdev1 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.727 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 BaseBdev2_malloc 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 true 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 [2024-11-20 07:12:11.044611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:29.101 [2024-11-20 07:12:11.044775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.101 [2024-11-20 07:12:11.044819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:29.101 [2024-11-20 07:12:11.044857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.101 [2024-11-20 07:12:11.047401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.101 [2024-11-20 07:12:11.047496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:29.101 BaseBdev2 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 BaseBdev3_malloc 00:15:29.101 07:12:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 true 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 [2024-11-20 07:12:11.128917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:29.101 [2024-11-20 07:12:11.129015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.101 [2024-11-20 07:12:11.129036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:29.101 [2024-11-20 07:12:11.129049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.101 [2024-11-20 07:12:11.131425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.101 [2024-11-20 07:12:11.131468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:29.101 BaseBdev3 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 BaseBdev4_malloc 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 true 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 [2024-11-20 07:12:11.201407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:29.101 [2024-11-20 07:12:11.201583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.101 [2024-11-20 07:12:11.201631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:29.101 [2024-11-20 07:12:11.201686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.101 [2024-11-20 07:12:11.204357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.101 [2024-11-20 07:12:11.204444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:29.101 BaseBdev4 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 [2024-11-20 07:12:11.213460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.101 [2024-11-20 07:12:11.215586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.101 [2024-11-20 07:12:11.215722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.101 [2024-11-20 07:12:11.215839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.101 [2024-11-20 07:12:11.216160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:29.101 [2024-11-20 07:12:11.216217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:29.101 [2024-11-20 07:12:11.216547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:29.101 [2024-11-20 07:12:11.216788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:29.101 [2024-11-20 07:12:11.216834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:29.101 [2024-11-20 07:12:11.217073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:29.101 07:12:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.101 "name": "raid_bdev1", 00:15:29.101 "uuid": "8a147d5f-2139-4f49-acc4-3b22fb2f7c41", 00:15:29.101 "strip_size_kb": 0, 00:15:29.101 "state": "online", 00:15:29.101 "raid_level": "raid1", 00:15:29.101 "superblock": true, 00:15:29.101 "num_base_bdevs": 4, 00:15:29.101 "num_base_bdevs_discovered": 4, 00:15:29.101 "num_base_bdevs_operational": 4, 00:15:29.101 "base_bdevs_list": [ 00:15:29.101 { 
00:15:29.101 "name": "BaseBdev1", 00:15:29.101 "uuid": "180eed88-eed3-589e-8171-b6ec121ef997", 00:15:29.101 "is_configured": true, 00:15:29.101 "data_offset": 2048, 00:15:29.101 "data_size": 63488 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "name": "BaseBdev2", 00:15:29.101 "uuid": "af4d26b4-c31d-5486-8414-96a3340f6e4c", 00:15:29.101 "is_configured": true, 00:15:29.101 "data_offset": 2048, 00:15:29.101 "data_size": 63488 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "name": "BaseBdev3", 00:15:29.102 "uuid": "3e4cc521-3d18-5b75-80f7-cd6dfd8184df", 00:15:29.102 "is_configured": true, 00:15:29.102 "data_offset": 2048, 00:15:29.102 "data_size": 63488 00:15:29.102 }, 00:15:29.102 { 00:15:29.102 "name": "BaseBdev4", 00:15:29.102 "uuid": "911f517c-325f-53ad-9ed2-454cc0c750e8", 00:15:29.102 "is_configured": true, 00:15:29.102 "data_offset": 2048, 00:15:29.102 "data_size": 63488 00:15:29.102 } 00:15:29.102 ] 00:15:29.102 }' 00:15:29.102 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.102 07:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.668 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:29.668 07:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:29.668 [2024-11-20 07:12:11.778319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.600 07:12:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.600 07:12:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.600 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.600 "name": "raid_bdev1", 00:15:30.600 "uuid": "8a147d5f-2139-4f49-acc4-3b22fb2f7c41", 00:15:30.601 "strip_size_kb": 0, 00:15:30.601 "state": "online", 00:15:30.601 "raid_level": "raid1", 00:15:30.601 "superblock": true, 00:15:30.601 "num_base_bdevs": 4, 00:15:30.601 "num_base_bdevs_discovered": 4, 00:15:30.601 "num_base_bdevs_operational": 4, 00:15:30.601 "base_bdevs_list": [ 00:15:30.601 { 00:15:30.601 "name": "BaseBdev1", 00:15:30.601 "uuid": "180eed88-eed3-589e-8171-b6ec121ef997", 00:15:30.601 "is_configured": true, 00:15:30.601 "data_offset": 2048, 00:15:30.601 "data_size": 63488 00:15:30.601 }, 00:15:30.601 { 00:15:30.601 "name": "BaseBdev2", 00:15:30.601 "uuid": "af4d26b4-c31d-5486-8414-96a3340f6e4c", 00:15:30.601 "is_configured": true, 00:15:30.601 "data_offset": 2048, 00:15:30.601 "data_size": 63488 00:15:30.601 }, 00:15:30.601 { 00:15:30.601 "name": "BaseBdev3", 00:15:30.601 "uuid": "3e4cc521-3d18-5b75-80f7-cd6dfd8184df", 00:15:30.601 "is_configured": true, 00:15:30.601 "data_offset": 2048, 00:15:30.601 "data_size": 63488 00:15:30.601 }, 00:15:30.601 { 00:15:30.601 "name": "BaseBdev4", 00:15:30.601 "uuid": "911f517c-325f-53ad-9ed2-454cc0c750e8", 00:15:30.601 "is_configured": true, 00:15:30.601 "data_offset": 2048, 00:15:30.601 "data_size": 63488 00:15:30.601 } 00:15:30.601 ] 00:15:30.601 }' 00:15:30.601 07:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.601 07:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.168 07:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.168 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.168 07:12:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.168 [2024-11-20 07:12:13.146220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.168 [2024-11-20 07:12:13.146409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.168 [2024-11-20 07:12:13.149701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.168 [2024-11-20 07:12:13.149819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.168 [2024-11-20 07:12:13.149984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.168 [2024-11-20 07:12:13.150044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:31.168 { 00:15:31.168 "results": [ 00:15:31.168 { 00:15:31.168 "job": "raid_bdev1", 00:15:31.168 "core_mask": "0x1", 00:15:31.168 "workload": "randrw", 00:15:31.168 "percentage": 50, 00:15:31.168 "status": "finished", 00:15:31.168 "queue_depth": 1, 00:15:31.168 "io_size": 131072, 00:15:31.168 "runtime": 1.368538, 00:15:31.168 "iops": 9236.133742723987, 00:15:31.168 "mibps": 1154.5167178404984, 00:15:31.168 "io_failed": 0, 00:15:31.168 "io_timeout": 0, 00:15:31.168 "avg_latency_us": 105.07133491791498, 00:15:31.168 "min_latency_us": 25.041048034934498, 00:15:31.168 "max_latency_us": 1688.482096069869 00:15:31.169 } 00:15:31.169 ], 00:15:31.169 "core_count": 1 00:15:31.169 } 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75374 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75374 ']' 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75374 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75374 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.169 killing process with pid 75374 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75374' 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75374 00:15:31.169 [2024-11-20 07:12:13.190417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.169 07:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75374 00:15:31.432 [2024-11-20 07:12:13.579553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Hof1wGY5Io 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:32.830 00:15:32.830 real 0m4.970s 00:15:32.830 user 0m5.849s 00:15:32.830 sys 0m0.629s 
00:15:32.830 ************************************ 00:15:32.830 END TEST raid_read_error_test 00:15:32.830 ************************************ 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.830 07:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.830 07:12:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:32.830 07:12:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:32.830 07:12:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.830 07:12:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.830 ************************************ 00:15:32.830 START TEST raid_write_error_test 00:15:32.830 ************************************ 00:15:32.830 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WEP1Oo1mWs 00:15:32.831 07:12:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75526 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75526 00:15:32.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75526 ']' 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.831 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.831 [2024-11-20 07:12:15.085803] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:15:32.831 [2024-11-20 07:12:15.085958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75526 ] 00:15:33.091 [2024-11-20 07:12:15.273979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.350 [2024-11-20 07:12:15.402250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.609 [2024-11-20 07:12:15.630822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.609 [2024-11-20 07:12:15.630894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.870 BaseBdev1_malloc 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.870 true 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.870 [2024-11-20 07:12:16.078509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:33.870 [2024-11-20 07:12:16.078688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.870 [2024-11-20 07:12:16.078735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:33.870 [2024-11-20 07:12:16.078772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.870 [2024-11-20 07:12:16.081331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.870 [2024-11-20 07:12:16.081447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.870 BaseBdev1 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.870 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 BaseBdev2_malloc 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:34.130 07:12:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 true 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 [2024-11-20 07:12:16.152858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:34.130 [2024-11-20 07:12:16.153036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.130 [2024-11-20 07:12:16.153062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:34.130 [2024-11-20 07:12:16.153074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.130 [2024-11-20 07:12:16.155587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.130 [2024-11-20 07:12:16.155636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:34.130 BaseBdev2 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:34.130 BaseBdev3_malloc 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 true 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 [2024-11-20 07:12:16.237478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:34.130 [2024-11-20 07:12:16.237643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.130 [2024-11-20 07:12:16.237685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:34.130 [2024-11-20 07:12:16.237743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.130 [2024-11-20 07:12:16.240034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.130 [2024-11-20 07:12:16.240115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:34.130 BaseBdev3 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 BaseBdev4_malloc 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 true 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 [2024-11-20 07:12:16.311732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:34.130 [2024-11-20 07:12:16.311903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.130 [2024-11-20 07:12:16.311955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:34.130 [2024-11-20 07:12:16.312007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.130 [2024-11-20 07:12:16.314555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.130 [2024-11-20 07:12:16.314659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:34.130 BaseBdev4 
00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 [2024-11-20 07:12:16.323783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.130 [2024-11-20 07:12:16.325968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.130 [2024-11-20 07:12:16.326110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.130 [2024-11-20 07:12:16.326237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:34.130 [2024-11-20 07:12:16.326553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:34.130 [2024-11-20 07:12:16.326609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:34.130 [2024-11-20 07:12:16.326920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:34.130 [2024-11-20 07:12:16.327153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:34.130 [2024-11-20 07:12:16.327198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:34.130 [2024-11-20 07:12:16.327449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.130 "name": "raid_bdev1", 00:15:34.130 "uuid": "77188077-6c47-4dfd-9e28-03f8423acb96", 00:15:34.130 "strip_size_kb": 0, 00:15:34.130 "state": "online", 00:15:34.130 "raid_level": "raid1", 00:15:34.130 "superblock": true, 00:15:34.130 "num_base_bdevs": 4, 00:15:34.130 "num_base_bdevs_discovered": 4, 00:15:34.130 
"num_base_bdevs_operational": 4, 00:15:34.130 "base_bdevs_list": [ 00:15:34.130 { 00:15:34.130 "name": "BaseBdev1", 00:15:34.130 "uuid": "b0352fbd-aa29-58b7-8cc9-156f2bfa1b32", 00:15:34.130 "is_configured": true, 00:15:34.130 "data_offset": 2048, 00:15:34.130 "data_size": 63488 00:15:34.130 }, 00:15:34.130 { 00:15:34.130 "name": "BaseBdev2", 00:15:34.130 "uuid": "338e51b5-5fd1-58e0-8f62-fe222bb08111", 00:15:34.130 "is_configured": true, 00:15:34.130 "data_offset": 2048, 00:15:34.130 "data_size": 63488 00:15:34.130 }, 00:15:34.130 { 00:15:34.130 "name": "BaseBdev3", 00:15:34.130 "uuid": "b9a0bd76-d5f2-5c3b-8fb4-7cab932e4649", 00:15:34.130 "is_configured": true, 00:15:34.130 "data_offset": 2048, 00:15:34.130 "data_size": 63488 00:15:34.130 }, 00:15:34.130 { 00:15:34.130 "name": "BaseBdev4", 00:15:34.130 "uuid": "5ce21679-9c14-5f9f-9047-25a5f6f42e3c", 00:15:34.130 "is_configured": true, 00:15:34.130 "data_offset": 2048, 00:15:34.130 "data_size": 63488 00:15:34.130 } 00:15:34.130 ] 00:15:34.130 }' 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.130 07:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.699 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:34.699 07:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:34.699 [2024-11-20 07:12:16.899985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.704 [2024-11-20 07:12:17.804040] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:35.704 [2024-11-20 07:12:17.804230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.704 [2024-11-20 07:12:17.804493] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.704 "name": "raid_bdev1", 00:15:35.704 "uuid": "77188077-6c47-4dfd-9e28-03f8423acb96", 00:15:35.704 "strip_size_kb": 0, 00:15:35.704 "state": "online", 00:15:35.704 "raid_level": "raid1", 00:15:35.704 "superblock": true, 00:15:35.704 "num_base_bdevs": 4, 00:15:35.704 "num_base_bdevs_discovered": 3, 00:15:35.704 "num_base_bdevs_operational": 3, 00:15:35.704 "base_bdevs_list": [ 00:15:35.704 { 00:15:35.704 "name": null, 00:15:35.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.704 "is_configured": false, 00:15:35.704 "data_offset": 0, 00:15:35.704 "data_size": 63488 00:15:35.704 }, 00:15:35.704 { 00:15:35.704 "name": "BaseBdev2", 00:15:35.704 "uuid": "338e51b5-5fd1-58e0-8f62-fe222bb08111", 00:15:35.704 "is_configured": true, 00:15:35.704 "data_offset": 2048, 00:15:35.704 "data_size": 63488 00:15:35.704 }, 00:15:35.704 { 00:15:35.704 "name": "BaseBdev3", 00:15:35.704 "uuid": "b9a0bd76-d5f2-5c3b-8fb4-7cab932e4649", 00:15:35.704 "is_configured": true, 00:15:35.704 "data_offset": 2048, 00:15:35.704 "data_size": 63488 00:15:35.704 }, 00:15:35.704 { 00:15:35.704 "name": "BaseBdev4", 00:15:35.704 "uuid": "5ce21679-9c14-5f9f-9047-25a5f6f42e3c", 00:15:35.704 "is_configured": true, 00:15:35.704 "data_offset": 2048, 00:15:35.704 "data_size": 63488 00:15:35.704 } 00:15:35.704 ] 
00:15:35.704 }' 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.704 07:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.273 [2024-11-20 07:12:18.298021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:36.273 [2024-11-20 07:12:18.298072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.273 [2024-11-20 07:12:18.300778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.273 [2024-11-20 07:12:18.300827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.273 [2024-11-20 07:12:18.300931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.273 [2024-11-20 07:12:18.300942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75526 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75526 ']' 00:15:36.273 { 00:15:36.273 "results": [ 00:15:36.273 { 00:15:36.273 "job": "raid_bdev1", 00:15:36.273 "core_mask": "0x1", 00:15:36.273 "workload": "randrw", 00:15:36.273 "percentage": 50, 00:15:36.273 "status": "finished", 00:15:36.273 "queue_depth": 1, 00:15:36.273 "io_size": 131072, 00:15:36.273 "runtime": 1.398902, 00:15:36.273 "iops": 
10789.890928742685, 00:15:36.273 "mibps": 1348.7363660928356, 00:15:36.273 "io_failed": 0, 00:15:36.273 "io_timeout": 0, 00:15:36.273 "avg_latency_us": 89.73107264345762, 00:15:36.273 "min_latency_us": 24.146724890829695, 00:15:36.273 "max_latency_us": 1695.6366812227075 00:15:36.273 } 00:15:36.273 ], 00:15:36.273 "core_count": 1 00:15:36.273 } 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75526 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75526 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.273 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75526' 00:15:36.273 killing process with pid 75526 00:15:36.274 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75526 00:15:36.274 [2024-11-20 07:12:18.338432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.274 07:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75526 00:15:36.532 [2024-11-20 07:12:18.709450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.911 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WEP1Oo1mWs 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:37.912 ************************************ 00:15:37.912 END 
TEST raid_write_error_test 00:15:37.912 ************************************ 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:37.912 00:15:37.912 real 0m5.091s 00:15:37.912 user 0m6.048s 00:15:37.912 sys 0m0.644s 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.912 07:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.912 07:12:20 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:15:37.912 07:12:20 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:37.912 07:12:20 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:15:37.912 07:12:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:37.912 07:12:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.912 07:12:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.912 ************************************ 00:15:37.912 START TEST raid_rebuild_test 00:15:37.912 ************************************ 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
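The `fail_per_s` check traced above (bdev_raid.sh@845-847) pipes the bdevperf result file through `grep` and `awk` and then asserts that a raid1 array absorbed the injected write error without any failed I/O. A minimal standalone sketch of that extraction is below; the sample result line and its column layout are illustrative assumptions, not the exact bdevperf report format, and the temp file stands in for the `/raidtest/tmp.*` file used in the trace.

```shell
# Hypothetical sample of a bdevperf result file; column 6 stands in for
# the failures-per-second field (the layout is an assumption).
out=$(mktemp)
cat > "$out" <<'EOF'
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50)
raid_bdev1 10789.89 1348.74 0 0 0.00 89.73
EOF

# Same pipeline shape as bdev_raid.sh@845: drop lines mentioning "Job",
# keep the raid_bdev1 row, print the 6th whitespace-separated column.
fail_per_s=$(grep -v Job "$out" | grep raid_bdev1 | awk '{print $6}')

# bdev_raid.sh@847 then asserts the value is exactly 0.00.
[[ $fail_per_s = 0.00 ]] && echo "no failed I/O per second"
rm -f "$out"
```

With redundancy (`has_redundancy raid1` returns 0 in the trace), a nonzero value here would fail the test; for raid levels without redundancy the script skips the comparison instead.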
00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75670 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75670 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75670 ']' 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.912 07:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.171 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.171 Zero copy mechanism will not be used. 00:15:38.171 [2024-11-20 07:12:20.240097] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
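Before the rebuild test can issue RPCs, the trace launches bdevperf in the background (`raid_pid=75670`) and blocks in `waitforlisten` until the app's RPC socket at `/var/tmp/spdk.sock` comes up. The sketch below is an illustrative stand-in for that poll loop, not the actual `common/autotest_common.sh` implementation; only the `max_retries=100` default and the socket path are taken from the trace.

```shell
# Illustrative stand-in for waitforlisten: wait until the target process
# is alive AND its UNIX-domain RPC socket exists, failing after a bounded
# number of retries (the trace shows local max_retries=100).
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # If the process already exited, its socket will never appear.
        kill -0 "$pid" 2>/dev/null || return 1
        # -S: the path exists and is a socket, i.e. the RPC server is up.
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```

A dead pid or a socket that never materializes makes the function return nonzero, which is what lets `waitforlisten 75670` gate everything after the bdevperf launch in the trace.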
00:15:38.171 [2024-11-20 07:12:20.240230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75670 ] 00:15:38.171 [2024-11-20 07:12:20.422119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.430 [2024-11-20 07:12:20.558417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.690 [2024-11-20 07:12:20.777899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.690 [2024-11-20 07:12:20.777964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.950 BaseBdev1_malloc 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.950 [2024-11-20 07:12:21.149937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:38.950 
[2024-11-20 07:12:21.150085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.950 [2024-11-20 07:12:21.150140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:38.950 [2024-11-20 07:12:21.150186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.950 [2024-11-20 07:12:21.152439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.950 [2024-11-20 07:12:21.152510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:38.950 BaseBdev1 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.950 BaseBdev2_malloc 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.950 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.950 [2024-11-20 07:12:21.209878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:38.950 [2024-11-20 07:12:21.210008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.950 [2024-11-20 07:12:21.210051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:15:38.950 [2024-11-20 07:12:21.210087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.950 [2024-11-20 07:12:21.212526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.950 [2024-11-20 07:12:21.212630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.210 BaseBdev2 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.210 spare_malloc 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.210 spare_delay 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.210 [2024-11-20 07:12:21.291409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.210 [2024-11-20 07:12:21.291536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:39.210 [2024-11-20 07:12:21.291592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:39.210 [2024-11-20 07:12:21.291629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.210 [2024-11-20 07:12:21.294012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.210 [2024-11-20 07:12:21.294096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.210 spare 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.210 [2024-11-20 07:12:21.303458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.210 [2024-11-20 07:12:21.305525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.210 [2024-11-20 07:12:21.305675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:39.210 [2024-11-20 07:12:21.305695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:39.210 [2024-11-20 07:12:21.306004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:39.210 [2024-11-20 07:12:21.306192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.210 [2024-11-20 07:12:21.306204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:39.210 [2024-11-20 07:12:21.306412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.210 "name": "raid_bdev1", 00:15:39.210 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:39.210 "strip_size_kb": 0, 00:15:39.210 "state": "online", 00:15:39.210 
"raid_level": "raid1", 00:15:39.210 "superblock": false, 00:15:39.210 "num_base_bdevs": 2, 00:15:39.210 "num_base_bdevs_discovered": 2, 00:15:39.210 "num_base_bdevs_operational": 2, 00:15:39.210 "base_bdevs_list": [ 00:15:39.210 { 00:15:39.210 "name": "BaseBdev1", 00:15:39.210 "uuid": "07ea617c-41b0-5dff-b706-0adc545bc885", 00:15:39.210 "is_configured": true, 00:15:39.210 "data_offset": 0, 00:15:39.210 "data_size": 65536 00:15:39.210 }, 00:15:39.210 { 00:15:39.210 "name": "BaseBdev2", 00:15:39.210 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:39.210 "is_configured": true, 00:15:39.210 "data_offset": 0, 00:15:39.210 "data_size": 65536 00:15:39.210 } 00:15:39.210 ] 00:15:39.210 }' 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.210 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:39.779 [2024-11-20 07:12:21.770962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.779 07:12:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.779 07:12:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:40.039 [2024-11-20 07:12:22.122169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:40.039 /dev/nbd0 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.039 1+0 records in 00:15:40.039 1+0 records out 00:15:40.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044062 s, 9.3 MB/s 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:40.039 07:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:45.315 65536+0 records in 00:15:45.315 65536+0 records out 00:15:45.315 33554432 bytes (34 MB, 32 MiB) copied, 4.7046 s, 7.1 MB/s 00:15:45.315 07:12:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:45.315 07:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.315 07:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:45.315 07:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.315 07:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:45.315 07:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.315 07:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:45.315 [2024-11-20 07:12:27.173204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.315 [2024-11-20 07:12:27.213481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.315 07:12:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.315 "name": "raid_bdev1", 00:15:45.315 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:45.315 "strip_size_kb": 0, 00:15:45.315 "state": "online", 00:15:45.315 "raid_level": "raid1", 00:15:45.315 "superblock": false, 00:15:45.315 "num_base_bdevs": 2, 00:15:45.315 "num_base_bdevs_discovered": 1, 00:15:45.315 "num_base_bdevs_operational": 1, 00:15:45.315 "base_bdevs_list": [ 00:15:45.315 { 00:15:45.315 "name": null, 00:15:45.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.315 "is_configured": false, 00:15:45.315 "data_offset": 0, 00:15:45.315 "data_size": 65536 00:15:45.315 }, 00:15:45.315 { 00:15:45.315 "name": "BaseBdev2", 00:15:45.315 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:45.315 "is_configured": true, 00:15:45.315 "data_offset": 0, 00:15:45.315 "data_size": 65536 00:15:45.315 } 00:15:45.315 ] 00:15:45.315 }' 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.315 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.574 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.574 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.574 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.574 [2024-11-20 07:12:27.688733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.574 [2024-11-20 07:12:27.709375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:15:45.574 07:12:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.574 07:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:45.574 [2024-11-20 07:12:27.711397] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.506 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.506 "name": "raid_bdev1", 00:15:46.506 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:46.506 "strip_size_kb": 0, 00:15:46.506 "state": "online", 00:15:46.506 "raid_level": "raid1", 00:15:46.506 "superblock": false, 00:15:46.506 "num_base_bdevs": 2, 00:15:46.506 "num_base_bdevs_discovered": 2, 00:15:46.507 "num_base_bdevs_operational": 2, 00:15:46.507 "process": { 00:15:46.507 "type": "rebuild", 00:15:46.507 "target": "spare", 00:15:46.507 "progress": { 00:15:46.507 
"blocks": 20480, 00:15:46.507 "percent": 31 00:15:46.507 } 00:15:46.507 }, 00:15:46.507 "base_bdevs_list": [ 00:15:46.507 { 00:15:46.507 "name": "spare", 00:15:46.507 "uuid": "b7361781-2afe-5d12-a458-5b3495a794f3", 00:15:46.507 "is_configured": true, 00:15:46.507 "data_offset": 0, 00:15:46.507 "data_size": 65536 00:15:46.507 }, 00:15:46.507 { 00:15:46.507 "name": "BaseBdev2", 00:15:46.507 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:46.507 "is_configured": true, 00:15:46.507 "data_offset": 0, 00:15:46.507 "data_size": 65536 00:15:46.507 } 00:15:46.507 ] 00:15:46.507 }' 00:15:46.507 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.764 [2024-11-20 07:12:28.847054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.764 [2024-11-20 07:12:28.917404] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:46.764 [2024-11-20 07:12:28.917593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.764 [2024-11-20 07:12:28.917613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.764 [2024-11-20 07:12:28.917624] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.764 07:12:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.764 07:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.764 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.764 "name": "raid_bdev1", 00:15:46.764 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:46.764 "strip_size_kb": 0, 00:15:46.764 "state": "online", 00:15:46.764 "raid_level": "raid1", 00:15:46.764 
"superblock": false, 00:15:46.764 "num_base_bdevs": 2, 00:15:46.764 "num_base_bdevs_discovered": 1, 00:15:46.764 "num_base_bdevs_operational": 1, 00:15:46.764 "base_bdevs_list": [ 00:15:46.764 { 00:15:46.764 "name": null, 00:15:46.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.764 "is_configured": false, 00:15:46.764 "data_offset": 0, 00:15:46.764 "data_size": 65536 00:15:46.764 }, 00:15:46.764 { 00:15:46.764 "name": "BaseBdev2", 00:15:46.764 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:46.764 "is_configured": true, 00:15:46.764 "data_offset": 0, 00:15:46.764 "data_size": 65536 00:15:46.764 } 00:15:46.764 ] 00:15:46.764 }' 00:15:46.764 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.764 07:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:47.330 "name": "raid_bdev1", 00:15:47.330 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:47.330 "strip_size_kb": 0, 00:15:47.330 "state": "online", 00:15:47.330 "raid_level": "raid1", 00:15:47.330 "superblock": false, 00:15:47.330 "num_base_bdevs": 2, 00:15:47.330 "num_base_bdevs_discovered": 1, 00:15:47.330 "num_base_bdevs_operational": 1, 00:15:47.330 "base_bdevs_list": [ 00:15:47.330 { 00:15:47.330 "name": null, 00:15:47.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.330 "is_configured": false, 00:15:47.330 "data_offset": 0, 00:15:47.330 "data_size": 65536 00:15:47.330 }, 00:15:47.330 { 00:15:47.330 "name": "BaseBdev2", 00:15:47.330 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:47.330 "is_configured": true, 00:15:47.330 "data_offset": 0, 00:15:47.330 "data_size": 65536 00:15:47.330 } 00:15:47.330 ] 00:15:47.330 }' 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.330 [2024-11-20 07:12:29.529714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.330 [2024-11-20 07:12:29.548542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:47.330 07:12:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.330 
07:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:47.330 [2024-11-20 07:12:29.550828] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.700 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.700 "name": "raid_bdev1", 00:15:48.700 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:48.700 "strip_size_kb": 0, 00:15:48.700 "state": "online", 00:15:48.700 "raid_level": "raid1", 00:15:48.700 "superblock": false, 00:15:48.700 "num_base_bdevs": 2, 00:15:48.700 "num_base_bdevs_discovered": 2, 00:15:48.700 "num_base_bdevs_operational": 2, 00:15:48.700 "process": { 00:15:48.700 "type": "rebuild", 00:15:48.700 "target": "spare", 00:15:48.700 "progress": { 00:15:48.700 "blocks": 20480, 00:15:48.700 "percent": 31 00:15:48.700 } 00:15:48.700 }, 00:15:48.700 "base_bdevs_list": [ 
00:15:48.700 { 00:15:48.700 "name": "spare", 00:15:48.700 "uuid": "b7361781-2afe-5d12-a458-5b3495a794f3", 00:15:48.700 "is_configured": true, 00:15:48.700 "data_offset": 0, 00:15:48.701 "data_size": 65536 00:15:48.701 }, 00:15:48.701 { 00:15:48.701 "name": "BaseBdev2", 00:15:48.701 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:48.701 "is_configured": true, 00:15:48.701 "data_offset": 0, 00:15:48.701 "data_size": 65536 00:15:48.701 } 00:15:48.701 ] 00:15:48.701 }' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=386 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.701 
07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.701 "name": "raid_bdev1", 00:15:48.701 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:48.701 "strip_size_kb": 0, 00:15:48.701 "state": "online", 00:15:48.701 "raid_level": "raid1", 00:15:48.701 "superblock": false, 00:15:48.701 "num_base_bdevs": 2, 00:15:48.701 "num_base_bdevs_discovered": 2, 00:15:48.701 "num_base_bdevs_operational": 2, 00:15:48.701 "process": { 00:15:48.701 "type": "rebuild", 00:15:48.701 "target": "spare", 00:15:48.701 "progress": { 00:15:48.701 "blocks": 22528, 00:15:48.701 "percent": 34 00:15:48.701 } 00:15:48.701 }, 00:15:48.701 "base_bdevs_list": [ 00:15:48.701 { 00:15:48.701 "name": "spare", 00:15:48.701 "uuid": "b7361781-2afe-5d12-a458-5b3495a794f3", 00:15:48.701 "is_configured": true, 00:15:48.701 "data_offset": 0, 00:15:48.701 "data_size": 65536 00:15:48.701 }, 00:15:48.701 { 00:15:48.701 "name": "BaseBdev2", 00:15:48.701 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:48.701 "is_configured": true, 00:15:48.701 "data_offset": 0, 00:15:48.701 "data_size": 65536 00:15:48.701 } 00:15:48.701 ] 00:15:48.701 }' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.701 07:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.632 "name": "raid_bdev1", 00:15:49.632 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:49.632 "strip_size_kb": 0, 00:15:49.632 "state": "online", 00:15:49.632 "raid_level": "raid1", 00:15:49.632 "superblock": false, 00:15:49.632 "num_base_bdevs": 2, 00:15:49.632 "num_base_bdevs_discovered": 2, 00:15:49.632 "num_base_bdevs_operational": 2, 00:15:49.632 "process": { 
00:15:49.632 "type": "rebuild", 00:15:49.632 "target": "spare", 00:15:49.632 "progress": { 00:15:49.632 "blocks": 45056, 00:15:49.632 "percent": 68 00:15:49.632 } 00:15:49.632 }, 00:15:49.632 "base_bdevs_list": [ 00:15:49.632 { 00:15:49.632 "name": "spare", 00:15:49.632 "uuid": "b7361781-2afe-5d12-a458-5b3495a794f3", 00:15:49.632 "is_configured": true, 00:15:49.632 "data_offset": 0, 00:15:49.632 "data_size": 65536 00:15:49.632 }, 00:15:49.632 { 00:15:49.632 "name": "BaseBdev2", 00:15:49.632 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:49.632 "is_configured": true, 00:15:49.632 "data_offset": 0, 00:15:49.632 "data_size": 65536 00:15:49.632 } 00:15:49.632 ] 00:15:49.632 }' 00:15:49.632 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.889 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.889 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.889 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.889 07:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.822 [2024-11-20 07:12:32.766178] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:50.822 [2024-11-20 07:12:32.766378] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:50.822 [2024-11-20 07:12:32.766437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.822 07:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.822 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.822 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.822 "name": "raid_bdev1", 00:15:50.822 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:50.822 "strip_size_kb": 0, 00:15:50.822 "state": "online", 00:15:50.822 "raid_level": "raid1", 00:15:50.822 "superblock": false, 00:15:50.822 "num_base_bdevs": 2, 00:15:50.822 "num_base_bdevs_discovered": 2, 00:15:50.822 "num_base_bdevs_operational": 2, 00:15:50.822 "base_bdevs_list": [ 00:15:50.822 { 00:15:50.822 "name": "spare", 00:15:50.822 "uuid": "b7361781-2afe-5d12-a458-5b3495a794f3", 00:15:50.822 "is_configured": true, 00:15:50.822 "data_offset": 0, 00:15:50.822 "data_size": 65536 00:15:50.822 }, 00:15:50.822 { 00:15:50.822 "name": "BaseBdev2", 00:15:50.822 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:50.822 "is_configured": true, 00:15:50.822 "data_offset": 0, 00:15:50.822 "data_size": 65536 00:15:50.822 } 00:15:50.822 ] 00:15:50.822 }' 00:15:50.822 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.822 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:50.822 07:12:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.080 "name": "raid_bdev1", 00:15:51.080 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:51.080 "strip_size_kb": 0, 00:15:51.080 "state": "online", 00:15:51.080 "raid_level": "raid1", 00:15:51.080 "superblock": false, 00:15:51.080 "num_base_bdevs": 2, 00:15:51.080 "num_base_bdevs_discovered": 2, 00:15:51.080 "num_base_bdevs_operational": 2, 00:15:51.080 "base_bdevs_list": [ 00:15:51.080 { 00:15:51.080 "name": "spare", 00:15:51.080 "uuid": "b7361781-2afe-5d12-a458-5b3495a794f3", 00:15:51.080 "is_configured": true, 
00:15:51.080 "data_offset": 0, 00:15:51.080 "data_size": 65536 00:15:51.080 }, 00:15:51.080 { 00:15:51.080 "name": "BaseBdev2", 00:15:51.080 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:51.080 "is_configured": true, 00:15:51.080 "data_offset": 0, 00:15:51.080 "data_size": 65536 00:15:51.080 } 00:15:51.080 ] 00:15:51.080 }' 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.080 07:12:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.080 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.080 "name": "raid_bdev1", 00:15:51.080 "uuid": "4a9c0d43-9733-4a9f-8fa2-fc59845ab0ca", 00:15:51.080 "strip_size_kb": 0, 00:15:51.080 "state": "online", 00:15:51.080 "raid_level": "raid1", 00:15:51.080 "superblock": false, 00:15:51.080 "num_base_bdevs": 2, 00:15:51.080 "num_base_bdevs_discovered": 2, 00:15:51.080 "num_base_bdevs_operational": 2, 00:15:51.080 "base_bdevs_list": [ 00:15:51.080 { 00:15:51.080 "name": "spare", 00:15:51.080 "uuid": "b7361781-2afe-5d12-a458-5b3495a794f3", 00:15:51.080 "is_configured": true, 00:15:51.080 "data_offset": 0, 00:15:51.080 "data_size": 65536 00:15:51.080 }, 00:15:51.080 { 00:15:51.080 "name": "BaseBdev2", 00:15:51.080 "uuid": "95cf8ed3-7339-5d9a-8612-6c094a1023dd", 00:15:51.081 "is_configured": true, 00:15:51.081 "data_offset": 0, 00:15:51.081 "data_size": 65536 00:15:51.081 } 00:15:51.081 ] 00:15:51.081 }' 00:15:51.081 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.081 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.647 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.647 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.648 [2024-11-20 07:12:33.718565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.648 [2024-11-20 
07:12:33.718665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.648 [2024-11-20 07:12:33.718785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.648 [2024-11-20 07:12:33.718889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.648 [2024-11-20 07:12:33.718938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.648 07:12:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:51.906 /dev/nbd0 00:15:51.906 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.906 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.907 1+0 records in 00:15:51.907 1+0 records out 00:15:51.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056835 s, 7.2 MB/s 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.907 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:52.166 /dev/nbd1 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.166 1+0 records in 00:15:52.166 1+0 records out 00:15:52.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599712 s, 6.8 MB/s 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.166 07:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:52.424 07:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:52.424 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.424 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:52.424 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.424 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:52.424 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.424 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.682 07:12:34 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.682 07:12:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75670 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75670 ']' 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75670 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75670 00:15:52.940 killing process with pid 75670 00:15:52.940 Received shutdown signal, test time was about 60.000000 seconds 00:15:52.940 00:15:52.940 Latency(us) 00:15:52.940 [2024-11-20T07:12:35.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.940 [2024-11-20T07:12:35.205Z] =================================================================================================================== 00:15:52.940 [2024-11-20T07:12:35.205Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75670' 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75670 00:15:52.940 [2024-11-20 07:12:35.121626] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.940 07:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75670 00:15:53.198 [2024-11-20 07:12:35.436631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:54.569 00:15:54.569 real 0m16.566s 00:15:54.569 user 0m18.562s 00:15:54.569 sys 
0m3.290s 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.569 ************************************ 00:15:54.569 END TEST raid_rebuild_test 00:15:54.569 ************************************ 00:15:54.569 07:12:36 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:54.569 07:12:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:54.569 07:12:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.569 07:12:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.569 ************************************ 00:15:54.569 START TEST raid_rebuild_test_sb 00:15:54.569 ************************************ 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.569 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76105 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76105 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76105 ']' 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.570 07:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.828 [2024-11-20 07:12:36.860233] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:54.828 [2024-11-20 07:12:36.860468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76105 ] 00:15:54.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.828 Zero copy mechanism will not be used. 
00:15:54.828 [2024-11-20 07:12:37.022550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.085 [2024-11-20 07:12:37.166968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.343 [2024-11-20 07:12:37.430464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.343 [2024-11-20 07:12:37.430624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.601 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.601 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:55.601 07:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.601 07:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.601 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.601 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 BaseBdev1_malloc 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 [2024-11-20 07:12:37.890647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.862 [2024-11-20 07:12:37.890818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.862 [2024-11-20 07:12:37.890876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.862 [2024-11-20 
07:12:37.890942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.862 [2024-11-20 07:12:37.893779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.862 [2024-11-20 07:12:37.893898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.862 BaseBdev1 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 BaseBdev2_malloc 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 [2024-11-20 07:12:37.959544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:55.862 [2024-11-20 07:12:37.959652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.862 [2024-11-20 07:12:37.959691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.862 [2024-11-20 07:12:37.959715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.862 [2024-11-20 07:12:37.962689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:55.862 [2024-11-20 07:12:37.962823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.862 BaseBdev2 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 spare_malloc 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 spare_delay 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 [2024-11-20 07:12:38.046776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.862 [2024-11-20 07:12:38.046870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.862 [2024-11-20 07:12:38.046901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:55.862 [2024-11-20 07:12:38.046917] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.862 [2024-11-20 07:12:38.049693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.862 [2024-11-20 07:12:38.049750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.862 spare 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 [2024-11-20 07:12:38.058842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.862 [2024-11-20 07:12:38.061114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.862 [2024-11-20 07:12:38.061441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.862 [2024-11-20 07:12:38.061469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.862 [2024-11-20 07:12:38.061800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:55.862 [2024-11-20 07:12:38.062023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.862 [2024-11-20 07:12:38.062035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.862 [2024-11-20 07:12:38.062244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.862 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.862 "name": "raid_bdev1", 00:15:55.862 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:15:55.862 "strip_size_kb": 0, 00:15:55.862 "state": "online", 00:15:55.862 "raid_level": "raid1", 00:15:55.862 "superblock": true, 00:15:55.862 "num_base_bdevs": 2, 00:15:55.862 
"num_base_bdevs_discovered": 2, 00:15:55.862 "num_base_bdevs_operational": 2, 00:15:55.862 "base_bdevs_list": [ 00:15:55.862 { 00:15:55.862 "name": "BaseBdev1", 00:15:55.862 "uuid": "4de76654-c70f-5bc2-8a29-9777a9b3bb14", 00:15:55.862 "is_configured": true, 00:15:55.862 "data_offset": 2048, 00:15:55.862 "data_size": 63488 00:15:55.862 }, 00:15:55.863 { 00:15:55.863 "name": "BaseBdev2", 00:15:55.863 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:15:55.863 "is_configured": true, 00:15:55.863 "data_offset": 2048, 00:15:55.863 "data_size": 63488 00:15:55.863 } 00:15:55.863 ] 00:15:55.863 }' 00:15:55.863 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.863 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:56.431 [2024-11-20 07:12:38.530477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.431 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:56.729 [2024-11-20 07:12:38.837712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:56.729 /dev/nbd0 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.729 1+0 records in 00:15:56.729 1+0 records out 00:15:56.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517767 s, 7.9 MB/s 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.729 07:12:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:56.729 07:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:01.999 63488+0 records in 00:16:01.999 63488+0 records out 00:16:01.999 32505856 bytes (33 MB, 31 MiB) copied, 5.33908 s, 6.1 MB/s 00:16:01.999 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:01.999 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.999 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:01.999 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:01.999 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:01.999 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.999 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:02.259 [2024-11-20 07:12:44.479548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.259 [2024-11-20 07:12:44.515460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.259 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.518 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.518 07:12:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.518 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.518 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.518 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.518 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.518 "name": "raid_bdev1", 00:16:02.518 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:02.518 "strip_size_kb": 0, 00:16:02.518 "state": "online", 00:16:02.518 "raid_level": "raid1", 00:16:02.518 "superblock": true, 00:16:02.518 "num_base_bdevs": 2, 00:16:02.518 "num_base_bdevs_discovered": 1, 00:16:02.518 "num_base_bdevs_operational": 1, 00:16:02.518 "base_bdevs_list": [ 00:16:02.518 { 00:16:02.518 "name": null, 00:16:02.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.518 "is_configured": false, 00:16:02.518 "data_offset": 0, 00:16:02.518 "data_size": 63488 00:16:02.518 }, 00:16:02.518 { 00:16:02.518 "name": "BaseBdev2", 00:16:02.518 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:02.518 "is_configured": true, 00:16:02.518 "data_offset": 2048, 00:16:02.518 "data_size": 63488 00:16:02.518 } 00:16:02.518 ] 00:16:02.518 }' 00:16:02.518 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.518 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.777 07:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.777 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.777 07:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.777 [2024-11-20 07:12:45.006639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:16:02.777 [2024-11-20 07:12:45.024932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:16:02.777 07:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.777 [2024-11-20 07:12:45.027222] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.777 07:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.156 "name": "raid_bdev1", 00:16:04.156 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:04.156 "strip_size_kb": 0, 00:16:04.156 "state": "online", 00:16:04.156 "raid_level": "raid1", 00:16:04.156 "superblock": true, 00:16:04.156 "num_base_bdevs": 2, 00:16:04.156 
"num_base_bdevs_discovered": 2, 00:16:04.156 "num_base_bdevs_operational": 2, 00:16:04.156 "process": { 00:16:04.156 "type": "rebuild", 00:16:04.156 "target": "spare", 00:16:04.156 "progress": { 00:16:04.156 "blocks": 20480, 00:16:04.156 "percent": 32 00:16:04.156 } 00:16:04.156 }, 00:16:04.156 "base_bdevs_list": [ 00:16:04.156 { 00:16:04.156 "name": "spare", 00:16:04.156 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:04.156 "is_configured": true, 00:16:04.156 "data_offset": 2048, 00:16:04.156 "data_size": 63488 00:16:04.156 }, 00:16:04.156 { 00:16:04.156 "name": "BaseBdev2", 00:16:04.156 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:04.156 "is_configured": true, 00:16:04.156 "data_offset": 2048, 00:16:04.156 "data_size": 63488 00:16:04.156 } 00:16:04.156 ] 00:16:04.156 }' 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.156 [2024-11-20 07:12:46.182450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.156 [2024-11-20 07:12:46.233557] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.156 [2024-11-20 07:12:46.233761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.156 [2024-11-20 07:12:46.233783] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.156 [2024-11-20 07:12:46.233798] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.156 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.156 07:12:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.157 "name": "raid_bdev1", 00:16:04.157 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:04.157 "strip_size_kb": 0, 00:16:04.157 "state": "online", 00:16:04.157 "raid_level": "raid1", 00:16:04.157 "superblock": true, 00:16:04.157 "num_base_bdevs": 2, 00:16:04.157 "num_base_bdevs_discovered": 1, 00:16:04.157 "num_base_bdevs_operational": 1, 00:16:04.157 "base_bdevs_list": [ 00:16:04.157 { 00:16:04.157 "name": null, 00:16:04.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.157 "is_configured": false, 00:16:04.157 "data_offset": 0, 00:16:04.157 "data_size": 63488 00:16:04.157 }, 00:16:04.157 { 00:16:04.157 "name": "BaseBdev2", 00:16:04.157 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:04.157 "is_configured": true, 00:16:04.157 "data_offset": 2048, 00:16:04.157 "data_size": 63488 00:16:04.157 } 00:16:04.157 ] 00:16:04.157 }' 00:16:04.157 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.157 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.725 "name": "raid_bdev1", 00:16:04.725 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:04.725 "strip_size_kb": 0, 00:16:04.725 "state": "online", 00:16:04.725 "raid_level": "raid1", 00:16:04.725 "superblock": true, 00:16:04.725 "num_base_bdevs": 2, 00:16:04.725 "num_base_bdevs_discovered": 1, 00:16:04.725 "num_base_bdevs_operational": 1, 00:16:04.725 "base_bdevs_list": [ 00:16:04.725 { 00:16:04.725 "name": null, 00:16:04.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.725 "is_configured": false, 00:16:04.725 "data_offset": 0, 00:16:04.725 "data_size": 63488 00:16:04.725 }, 00:16:04.725 { 00:16:04.725 "name": "BaseBdev2", 00:16:04.725 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:04.725 "is_configured": true, 00:16:04.725 "data_offset": 2048, 00:16:04.725 "data_size": 63488 00:16:04.725 } 00:16:04.725 ] 00:16:04.725 }' 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:04.725 [2024-11-20 07:12:46.880741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.725 [2024-11-20 07:12:46.900268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.725 07:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:04.725 [2024-11-20 07:12:46.902478] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.714 "name": "raid_bdev1", 00:16:05.714 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:05.714 "strip_size_kb": 0, 00:16:05.714 "state": "online", 00:16:05.714 "raid_level": "raid1", 
00:16:05.714 "superblock": true, 00:16:05.714 "num_base_bdevs": 2, 00:16:05.714 "num_base_bdevs_discovered": 2, 00:16:05.714 "num_base_bdevs_operational": 2, 00:16:05.714 "process": { 00:16:05.714 "type": "rebuild", 00:16:05.714 "target": "spare", 00:16:05.714 "progress": { 00:16:05.714 "blocks": 20480, 00:16:05.714 "percent": 32 00:16:05.714 } 00:16:05.714 }, 00:16:05.714 "base_bdevs_list": [ 00:16:05.714 { 00:16:05.714 "name": "spare", 00:16:05.714 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:05.714 "is_configured": true, 00:16:05.714 "data_offset": 2048, 00:16:05.714 "data_size": 63488 00:16:05.714 }, 00:16:05.714 { 00:16:05.714 "name": "BaseBdev2", 00:16:05.714 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:05.714 "is_configured": true, 00:16:05.714 "data_offset": 2048, 00:16:05.714 "data_size": 63488 00:16:05.714 } 00:16:05.714 ] 00:16:05.714 }' 00:16:05.714 07:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:05.974 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:05.974 07:12:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.974 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.974 "name": "raid_bdev1", 00:16:05.974 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:05.974 "strip_size_kb": 0, 00:16:05.974 "state": "online", 00:16:05.974 "raid_level": "raid1", 00:16:05.974 "superblock": true, 00:16:05.974 "num_base_bdevs": 2, 00:16:05.974 "num_base_bdevs_discovered": 2, 00:16:05.974 "num_base_bdevs_operational": 2, 00:16:05.974 "process": { 00:16:05.974 "type": "rebuild", 00:16:05.974 "target": "spare", 00:16:05.974 "progress": { 00:16:05.975 "blocks": 22528, 00:16:05.975 "percent": 35 00:16:05.975 } 00:16:05.975 }, 00:16:05.975 "base_bdevs_list": [ 
00:16:05.975 { 00:16:05.975 "name": "spare", 00:16:05.975 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:05.975 "is_configured": true, 00:16:05.975 "data_offset": 2048, 00:16:05.975 "data_size": 63488 00:16:05.975 }, 00:16:05.975 { 00:16:05.975 "name": "BaseBdev2", 00:16:05.975 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:05.975 "is_configured": true, 00:16:05.975 "data_offset": 2048, 00:16:05.975 "data_size": 63488 00:16:05.975 } 00:16:05.975 ] 00:16:05.975 }' 00:16:05.975 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.975 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.975 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.975 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.975 07:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.353 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.353 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.353 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.353 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.353 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.353 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.354 "name": "raid_bdev1", 00:16:07.354 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:07.354 "strip_size_kb": 0, 00:16:07.354 "state": "online", 00:16:07.354 "raid_level": "raid1", 00:16:07.354 "superblock": true, 00:16:07.354 "num_base_bdevs": 2, 00:16:07.354 "num_base_bdevs_discovered": 2, 00:16:07.354 "num_base_bdevs_operational": 2, 00:16:07.354 "process": { 00:16:07.354 "type": "rebuild", 00:16:07.354 "target": "spare", 00:16:07.354 "progress": { 00:16:07.354 "blocks": 47104, 00:16:07.354 "percent": 74 00:16:07.354 } 00:16:07.354 }, 00:16:07.354 "base_bdevs_list": [ 00:16:07.354 { 00:16:07.354 "name": "spare", 00:16:07.354 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:07.354 "is_configured": true, 00:16:07.354 "data_offset": 2048, 00:16:07.354 "data_size": 63488 00:16:07.354 }, 00:16:07.354 { 00:16:07.354 "name": "BaseBdev2", 00:16:07.354 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:07.354 "is_configured": true, 00:16:07.354 "data_offset": 2048, 00:16:07.354 "data_size": 63488 00:16:07.354 } 00:16:07.354 ] 00:16:07.354 }' 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.354 07:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.919 [2024-11-20 
07:12:50.017791] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:07.919 [2024-11-20 07:12:50.017972] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:07.919 [2024-11-20 07:12:50.018099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.176 "name": "raid_bdev1", 00:16:08.176 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:08.176 "strip_size_kb": 0, 00:16:08.176 "state": "online", 00:16:08.176 "raid_level": "raid1", 00:16:08.176 "superblock": true, 00:16:08.176 "num_base_bdevs": 2, 00:16:08.176 "num_base_bdevs_discovered": 2, 00:16:08.176 
"num_base_bdevs_operational": 2, 00:16:08.176 "base_bdevs_list": [ 00:16:08.176 { 00:16:08.176 "name": "spare", 00:16:08.176 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:08.176 "is_configured": true, 00:16:08.176 "data_offset": 2048, 00:16:08.176 "data_size": 63488 00:16:08.176 }, 00:16:08.176 { 00:16:08.176 "name": "BaseBdev2", 00:16:08.176 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:08.176 "is_configured": true, 00:16:08.176 "data_offset": 2048, 00:16:08.176 "data_size": 63488 00:16:08.176 } 00:16:08.176 ] 00:16:08.176 }' 00:16:08.176 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.435 "name": "raid_bdev1", 00:16:08.435 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:08.435 "strip_size_kb": 0, 00:16:08.435 "state": "online", 00:16:08.435 "raid_level": "raid1", 00:16:08.435 "superblock": true, 00:16:08.435 "num_base_bdevs": 2, 00:16:08.435 "num_base_bdevs_discovered": 2, 00:16:08.435 "num_base_bdevs_operational": 2, 00:16:08.435 "base_bdevs_list": [ 00:16:08.435 { 00:16:08.435 "name": "spare", 00:16:08.435 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:08.435 "is_configured": true, 00:16:08.435 "data_offset": 2048, 00:16:08.435 "data_size": 63488 00:16:08.435 }, 00:16:08.435 { 00:16:08.435 "name": "BaseBdev2", 00:16:08.435 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:08.435 "is_configured": true, 00:16:08.435 "data_offset": 2048, 00:16:08.435 "data_size": 63488 00:16:08.435 } 00:16:08.435 ] 00:16:08.435 }' 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.435 07:12:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.435 "name": "raid_bdev1", 00:16:08.435 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:08.435 "strip_size_kb": 0, 00:16:08.435 "state": "online", 00:16:08.435 "raid_level": "raid1", 00:16:08.435 "superblock": true, 00:16:08.435 "num_base_bdevs": 2, 00:16:08.435 "num_base_bdevs_discovered": 2, 00:16:08.435 "num_base_bdevs_operational": 2, 00:16:08.435 "base_bdevs_list": [ 00:16:08.435 { 00:16:08.435 "name": "spare", 00:16:08.435 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:08.435 "is_configured": true, 00:16:08.435 "data_offset": 2048, 00:16:08.435 "data_size": 63488 00:16:08.435 }, 00:16:08.435 { 
00:16:08.435 "name": "BaseBdev2", 00:16:08.435 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:08.435 "is_configured": true, 00:16:08.435 "data_offset": 2048, 00:16:08.435 "data_size": 63488 00:16:08.435 } 00:16:08.435 ] 00:16:08.435 }' 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.435 07:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.004 [2024-11-20 07:12:51.126368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.004 [2024-11-20 07:12:51.126478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.004 [2024-11-20 07:12:51.126601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.004 [2024-11-20 07:12:51.126696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.004 [2024-11-20 07:12:51.126759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.004 07:12:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.004 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:09.262 /dev/nbd0 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.262 1+0 records in 00:16:09.262 1+0 records out 00:16:09.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436909 s, 9.4 MB/s 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.262 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:09.519 /dev/nbd1 00:16:09.519 07:12:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.519 1+0 records in 00:16:09.519 1+0 records out 00:16:09.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402728 s, 10.2 MB/s 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:09.519 07:12:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.519 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:09.777 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:09.777 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.777 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.777 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:09.777 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:09.777 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.777 07:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.033 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.290 [2024-11-20 07:12:52.457263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:16:10.290 [2024-11-20 07:12:52.457341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.290 [2024-11-20 07:12:52.457370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:10.290 [2024-11-20 07:12:52.457381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.290 [2024-11-20 07:12:52.459930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.290 [2024-11-20 07:12:52.460029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.290 [2024-11-20 07:12:52.460150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.290 [2024-11-20 07:12:52.460217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.290 [2024-11-20 07:12:52.460403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.290 spare 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.290 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.548 [2024-11-20 07:12:52.560328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:10.549 [2024-11-20 07:12:52.560407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:10.549 [2024-11-20 07:12:52.560784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:16:10.549 [2024-11-20 07:12:52.561007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:10.549 [2024-11-20 07:12:52.561022] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:10.549 [2024-11-20 07:12:52.561251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.549 
07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.549 "name": "raid_bdev1", 00:16:10.549 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:10.549 "strip_size_kb": 0, 00:16:10.549 "state": "online", 00:16:10.549 "raid_level": "raid1", 00:16:10.549 "superblock": true, 00:16:10.549 "num_base_bdevs": 2, 00:16:10.549 "num_base_bdevs_discovered": 2, 00:16:10.549 "num_base_bdevs_operational": 2, 00:16:10.549 "base_bdevs_list": [ 00:16:10.549 { 00:16:10.549 "name": "spare", 00:16:10.549 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:10.549 "is_configured": true, 00:16:10.549 "data_offset": 2048, 00:16:10.549 "data_size": 63488 00:16:10.549 }, 00:16:10.549 { 00:16:10.549 "name": "BaseBdev2", 00:16:10.549 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:10.549 "is_configured": true, 00:16:10.549 "data_offset": 2048, 00:16:10.549 "data_size": 63488 00:16:10.549 } 00:16:10.549 ] 00:16:10.549 }' 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.549 07:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.807 07:12:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.807 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.065 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.065 "name": "raid_bdev1", 00:16:11.065 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:11.065 "strip_size_kb": 0, 00:16:11.065 "state": "online", 00:16:11.065 "raid_level": "raid1", 00:16:11.065 "superblock": true, 00:16:11.065 "num_base_bdevs": 2, 00:16:11.065 "num_base_bdevs_discovered": 2, 00:16:11.065 "num_base_bdevs_operational": 2, 00:16:11.065 "base_bdevs_list": [ 00:16:11.065 { 00:16:11.065 "name": "spare", 00:16:11.065 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:11.065 "is_configured": true, 00:16:11.065 "data_offset": 2048, 00:16:11.065 "data_size": 63488 00:16:11.065 }, 00:16:11.065 { 00:16:11.065 "name": "BaseBdev2", 00:16:11.065 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:11.065 "is_configured": true, 00:16:11.065 "data_offset": 2048, 00:16:11.065 "data_size": 63488 00:16:11.065 } 00:16:11.065 ] 00:16:11.065 }' 00:16:11.065 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.065 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.065 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.065 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.066 [2024-11-20 07:12:53.208188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.066 "name": "raid_bdev1", 00:16:11.066 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:11.066 "strip_size_kb": 0, 00:16:11.066 "state": "online", 00:16:11.066 "raid_level": "raid1", 00:16:11.066 "superblock": true, 00:16:11.066 "num_base_bdevs": 2, 00:16:11.066 "num_base_bdevs_discovered": 1, 00:16:11.066 "num_base_bdevs_operational": 1, 00:16:11.066 "base_bdevs_list": [ 00:16:11.066 { 00:16:11.066 "name": null, 00:16:11.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.066 "is_configured": false, 00:16:11.066 "data_offset": 0, 00:16:11.066 "data_size": 63488 00:16:11.066 }, 00:16:11.066 { 00:16:11.066 "name": "BaseBdev2", 00:16:11.066 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:11.066 "is_configured": true, 00:16:11.066 "data_offset": 2048, 00:16:11.066 "data_size": 63488 00:16:11.066 } 00:16:11.066 ] 00:16:11.066 }' 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.066 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.631 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.631 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.631 07:12:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.631 [2024-11-20 07:12:53.667509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.631 [2024-11-20 07:12:53.667784] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.631 [2024-11-20 07:12:53.667862] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:11.631 [2024-11-20 07:12:53.667966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.631 [2024-11-20 07:12:53.686126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:11.631 07:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.631 [2024-11-20 07:12:53.688263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.631 07:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.601 "name": "raid_bdev1", 00:16:12.601 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:12.601 "strip_size_kb": 0, 00:16:12.601 "state": "online", 00:16:12.601 "raid_level": "raid1", 00:16:12.601 "superblock": true, 00:16:12.601 "num_base_bdevs": 2, 00:16:12.601 "num_base_bdevs_discovered": 2, 00:16:12.601 "num_base_bdevs_operational": 2, 00:16:12.601 "process": { 00:16:12.601 "type": "rebuild", 00:16:12.601 "target": "spare", 00:16:12.601 "progress": { 00:16:12.601 "blocks": 20480, 00:16:12.601 "percent": 32 00:16:12.601 } 00:16:12.601 }, 00:16:12.601 "base_bdevs_list": [ 00:16:12.601 { 00:16:12.601 "name": "spare", 00:16:12.601 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:12.601 "is_configured": true, 00:16:12.601 "data_offset": 2048, 00:16:12.601 "data_size": 63488 00:16:12.601 }, 00:16:12.601 { 00:16:12.601 "name": "BaseBdev2", 00:16:12.601 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:12.601 "is_configured": true, 00:16:12.601 "data_offset": 2048, 00:16:12.601 "data_size": 63488 00:16:12.601 } 00:16:12.601 ] 00:16:12.601 }' 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.601 07:12:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.601 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.601 [2024-11-20 07:12:54.840067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.880 [2024-11-20 07:12:54.894400] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.880 [2024-11-20 07:12:54.894491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.880 [2024-11-20 07:12:54.894509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.880 [2024-11-20 07:12:54.894521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.880 "name": "raid_bdev1", 00:16:12.880 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:12.880 "strip_size_kb": 0, 00:16:12.880 "state": "online", 00:16:12.880 "raid_level": "raid1", 00:16:12.880 "superblock": true, 00:16:12.880 "num_base_bdevs": 2, 00:16:12.880 "num_base_bdevs_discovered": 1, 00:16:12.880 "num_base_bdevs_operational": 1, 00:16:12.880 "base_bdevs_list": [ 00:16:12.880 { 00:16:12.880 "name": null, 00:16:12.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.880 "is_configured": false, 00:16:12.880 "data_offset": 0, 00:16:12.880 "data_size": 63488 00:16:12.880 }, 00:16:12.880 { 00:16:12.880 "name": "BaseBdev2", 00:16:12.880 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:12.880 "is_configured": true, 00:16:12.880 "data_offset": 2048, 00:16:12.880 "data_size": 63488 00:16:12.880 } 00:16:12.880 ] 00:16:12.880 }' 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.880 07:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.448 07:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:13.448 07:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:13.448 07:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.448 [2024-11-20 07:12:55.432889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:13.448 [2024-11-20 07:12:55.433051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.448 [2024-11-20 07:12:55.433099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:13.448 [2024-11-20 07:12:55.433136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.448 [2024-11-20 07:12:55.433702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.448 [2024-11-20 07:12:55.433774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:13.448 [2024-11-20 07:12:55.433923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:13.448 [2024-11-20 07:12:55.433975] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.448 [2024-11-20 07:12:55.434024] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:13.448 [2024-11-20 07:12:55.434121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.448 [2024-11-20 07:12:55.453015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:13.448 spare 00:16:13.448 07:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.448 07:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:13.448 [2024-11-20 07:12:55.455174] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.383 "name": "raid_bdev1", 00:16:14.383 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:14.383 "strip_size_kb": 0, 00:16:14.383 "state": "online", 00:16:14.383 
"raid_level": "raid1", 00:16:14.383 "superblock": true, 00:16:14.383 "num_base_bdevs": 2, 00:16:14.383 "num_base_bdevs_discovered": 2, 00:16:14.383 "num_base_bdevs_operational": 2, 00:16:14.383 "process": { 00:16:14.383 "type": "rebuild", 00:16:14.383 "target": "spare", 00:16:14.383 "progress": { 00:16:14.383 "blocks": 20480, 00:16:14.383 "percent": 32 00:16:14.383 } 00:16:14.383 }, 00:16:14.383 "base_bdevs_list": [ 00:16:14.383 { 00:16:14.383 "name": "spare", 00:16:14.383 "uuid": "4d21936b-0630-5c3b-8c3c-16738ea00c87", 00:16:14.383 "is_configured": true, 00:16:14.383 "data_offset": 2048, 00:16:14.383 "data_size": 63488 00:16:14.383 }, 00:16:14.383 { 00:16:14.383 "name": "BaseBdev2", 00:16:14.383 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:14.383 "is_configured": true, 00:16:14.383 "data_offset": 2048, 00:16:14.383 "data_size": 63488 00:16:14.383 } 00:16:14.383 ] 00:16:14.383 }' 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.383 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.383 [2024-11-20 07:12:56.602287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.642 [2024-11-20 07:12:56.661176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.642 [2024-11-20 07:12:56.661320] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.642 [2024-11-20 07:12:56.661360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.642 [2024-11-20 07:12:56.661370] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.642 07:12:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.642 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.642 "name": "raid_bdev1", 00:16:14.642 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:14.642 "strip_size_kb": 0, 00:16:14.643 "state": "online", 00:16:14.643 "raid_level": "raid1", 00:16:14.643 "superblock": true, 00:16:14.643 "num_base_bdevs": 2, 00:16:14.643 "num_base_bdevs_discovered": 1, 00:16:14.643 "num_base_bdevs_operational": 1, 00:16:14.643 "base_bdevs_list": [ 00:16:14.643 { 00:16:14.643 "name": null, 00:16:14.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.643 "is_configured": false, 00:16:14.643 "data_offset": 0, 00:16:14.643 "data_size": 63488 00:16:14.643 }, 00:16:14.643 { 00:16:14.643 "name": "BaseBdev2", 00:16:14.643 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:14.643 "is_configured": true, 00:16:14.643 "data_offset": 2048, 00:16:14.643 "data_size": 63488 00:16:14.643 } 00:16:14.643 ] 00:16:14.643 }' 00:16:14.643 07:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.643 07:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.901 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.160 "name": "raid_bdev1", 00:16:15.160 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:15.160 "strip_size_kb": 0, 00:16:15.160 "state": "online", 00:16:15.160 "raid_level": "raid1", 00:16:15.160 "superblock": true, 00:16:15.160 "num_base_bdevs": 2, 00:16:15.160 "num_base_bdevs_discovered": 1, 00:16:15.160 "num_base_bdevs_operational": 1, 00:16:15.160 "base_bdevs_list": [ 00:16:15.160 { 00:16:15.160 "name": null, 00:16:15.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.160 "is_configured": false, 00:16:15.160 "data_offset": 0, 00:16:15.160 "data_size": 63488 00:16:15.160 }, 00:16:15.160 { 00:16:15.160 "name": "BaseBdev2", 00:16:15.160 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:15.160 "is_configured": true, 00:16:15.160 "data_offset": 2048, 00:16:15.160 "data_size": 63488 00:16:15.160 } 00:16:15.160 ] 00:16:15.160 }' 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.160 [2024-11-20 07:12:57.270343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.160 [2024-11-20 07:12:57.270449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.160 [2024-11-20 07:12:57.270495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:15.160 [2024-11-20 07:12:57.270563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.160 [2024-11-20 07:12:57.271054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.160 [2024-11-20 07:12:57.271074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.160 [2024-11-20 07:12:57.271165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:15.160 [2024-11-20 07:12:57.271181] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.160 [2024-11-20 07:12:57.271191] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:15.160 [2024-11-20 07:12:57.271203] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:15.160 BaseBdev1 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:15.160 07:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.119 "name": "raid_bdev1", 00:16:16.119 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:16.119 "strip_size_kb": 0, 
00:16:16.119 "state": "online", 00:16:16.119 "raid_level": "raid1", 00:16:16.119 "superblock": true, 00:16:16.119 "num_base_bdevs": 2, 00:16:16.119 "num_base_bdevs_discovered": 1, 00:16:16.119 "num_base_bdevs_operational": 1, 00:16:16.119 "base_bdevs_list": [ 00:16:16.119 { 00:16:16.119 "name": null, 00:16:16.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.119 "is_configured": false, 00:16:16.119 "data_offset": 0, 00:16:16.119 "data_size": 63488 00:16:16.119 }, 00:16:16.119 { 00:16:16.119 "name": "BaseBdev2", 00:16:16.119 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:16.119 "is_configured": true, 00:16:16.119 "data_offset": 2048, 00:16:16.119 "data_size": 63488 00:16:16.119 } 00:16:16.119 ] 00:16:16.119 }' 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.119 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.687 "name": "raid_bdev1", 00:16:16.687 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:16.687 "strip_size_kb": 0, 00:16:16.687 "state": "online", 00:16:16.687 "raid_level": "raid1", 00:16:16.687 "superblock": true, 00:16:16.687 "num_base_bdevs": 2, 00:16:16.687 "num_base_bdevs_discovered": 1, 00:16:16.687 "num_base_bdevs_operational": 1, 00:16:16.687 "base_bdevs_list": [ 00:16:16.687 { 00:16:16.687 "name": null, 00:16:16.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.687 "is_configured": false, 00:16:16.687 "data_offset": 0, 00:16:16.687 "data_size": 63488 00:16:16.687 }, 00:16:16.687 { 00:16:16.687 "name": "BaseBdev2", 00:16:16.687 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:16.687 "is_configured": true, 00:16:16.687 "data_offset": 2048, 00:16:16.687 "data_size": 63488 00:16:16.687 } 00:16:16.687 ] 00:16:16.687 }' 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:16.687 07:12:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.687 [2024-11-20 07:12:58.855808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.687 [2024-11-20 07:12:58.855989] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:16.687 [2024-11-20 07:12:58.856007] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:16.687 request: 00:16:16.687 { 00:16:16.687 "base_bdev": "BaseBdev1", 00:16:16.687 "raid_bdev": "raid_bdev1", 00:16:16.687 "method": "bdev_raid_add_base_bdev", 00:16:16.687 "req_id": 1 00:16:16.687 } 00:16:16.687 Got JSON-RPC error response 00:16:16.687 response: 00:16:16.687 { 00:16:16.687 "code": -22, 00:16:16.687 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:16.687 } 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.687 07:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:17.648 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.648 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.648 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.648 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.648 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.648 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 07:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.906 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.906 "name": "raid_bdev1", 00:16:17.906 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 
00:16:17.906 "strip_size_kb": 0, 00:16:17.906 "state": "online", 00:16:17.906 "raid_level": "raid1", 00:16:17.906 "superblock": true, 00:16:17.906 "num_base_bdevs": 2, 00:16:17.906 "num_base_bdevs_discovered": 1, 00:16:17.906 "num_base_bdevs_operational": 1, 00:16:17.906 "base_bdevs_list": [ 00:16:17.906 { 00:16:17.906 "name": null, 00:16:17.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.906 "is_configured": false, 00:16:17.906 "data_offset": 0, 00:16:17.907 "data_size": 63488 00:16:17.907 }, 00:16:17.907 { 00:16:17.907 "name": "BaseBdev2", 00:16:17.907 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:17.907 "is_configured": true, 00:16:17.907 "data_offset": 2048, 00:16:17.907 "data_size": 63488 00:16:17.907 } 00:16:17.907 ] 00:16:17.907 }' 00:16:17.907 07:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.907 07:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 07:13:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.165 "name": "raid_bdev1", 00:16:18.165 "uuid": "7e83cd33-53b0-4ea8-9f64-8cb5a8b2b4c0", 00:16:18.165 "strip_size_kb": 0, 00:16:18.165 "state": "online", 00:16:18.165 "raid_level": "raid1", 00:16:18.165 "superblock": true, 00:16:18.165 "num_base_bdevs": 2, 00:16:18.165 "num_base_bdevs_discovered": 1, 00:16:18.165 "num_base_bdevs_operational": 1, 00:16:18.165 "base_bdevs_list": [ 00:16:18.165 { 00:16:18.165 "name": null, 00:16:18.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.165 "is_configured": false, 00:16:18.165 "data_offset": 0, 00:16:18.165 "data_size": 63488 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "name": "BaseBdev2", 00:16:18.165 "uuid": "ddc9a8f2-263a-5953-a573-5fc33005c696", 00:16:18.165 "is_configured": true, 00:16:18.165 "data_offset": 2048, 00:16:18.165 "data_size": 63488 00:16:18.165 } 00:16:18.165 ] 00:16:18.165 }' 00:16:18.165 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76105 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76105 ']' 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76105 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:16:18.423 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76105 00:16:18.424 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.424 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.424 killing process with pid 76105 00:16:18.424 Received shutdown signal, test time was about 60.000000 seconds 00:16:18.424 00:16:18.424 Latency(us) 00:16:18.424 [2024-11-20T07:13:00.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.424 [2024-11-20T07:13:00.689Z] =================================================================================================================== 00:16:18.424 [2024-11-20T07:13:00.689Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.424 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76105' 00:16:18.424 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76105 00:16:18.424 [2024-11-20 07:13:00.521068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.424 [2024-11-20 07:13:00.521212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.424 07:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76105 00:16:18.424 [2024-11-20 07:13:00.521271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.424 [2024-11-20 07:13:00.521286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:18.683 [2024-11-20 07:13:00.858398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.058 07:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:20.058 00:16:20.058 real 0m25.317s 
00:16:20.058 user 0m29.927s 00:16:20.058 sys 0m4.445s 00:16:20.058 ************************************ 00:16:20.058 END TEST raid_rebuild_test_sb 00:16:20.058 ************************************ 00:16:20.058 07:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.058 07:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.058 07:13:02 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:20.058 07:13:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:20.059 07:13:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.059 07:13:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.059 ************************************ 00:16:20.059 START TEST raid_rebuild_test_io 00:16:20.059 ************************************ 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.059 
07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76855 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76855 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76855 ']' 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.059 07:13:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.059 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:20.059 Zero copy mechanism will not be used. 00:16:20.059 [2024-11-20 07:13:02.240202] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:16:20.059 [2024-11-20 07:13:02.240321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76855 ] 00:16:20.317 [2024-11-20 07:13:02.418575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.317 [2024-11-20 07:13:02.540786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.575 [2024-11-20 07:13:02.760168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.575 [2024-11-20 07:13:02.760241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 BaseBdev1_malloc 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 [2024-11-20 07:13:03.181473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:21.144 [2024-11-20 07:13:03.181620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.144 [2024-11-20 07:13:03.181672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:21.144 [2024-11-20 07:13:03.181706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.144 [2024-11-20 07:13:03.183963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.144 [2024-11-20 07:13:03.184049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.144 BaseBdev1 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 BaseBdev2_malloc 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 [2024-11-20 07:13:03.239060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:21.144 [2024-11-20 07:13:03.239175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.144 [2024-11-20 07:13:03.239225] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:21.144 [2024-11-20 07:13:03.239263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.144 [2024-11-20 07:13:03.241589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.144 [2024-11-20 07:13:03.241676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:21.144 BaseBdev2 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 spare_malloc 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 spare_delay 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.144 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 [2024-11-20 07:13:03.323412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:16:21.144 [2024-11-20 07:13:03.323483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.145 [2024-11-20 07:13:03.323504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:21.145 [2024-11-20 07:13:03.323514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.145 [2024-11-20 07:13:03.325823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.145 [2024-11-20 07:13:03.325868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.145 spare 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.145 [2024-11-20 07:13:03.335450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.145 [2024-11-20 07:13:03.337396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.145 [2024-11-20 07:13:03.337493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:21.145 [2024-11-20 07:13:03.337509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:21.145 [2024-11-20 07:13:03.337776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:21.145 [2024-11-20 07:13:03.337940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:21.145 [2024-11-20 07:13:03.337952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:16:21.145 [2024-11-20 07:13:03.338120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.145 
"name": "raid_bdev1", 00:16:21.145 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:21.145 "strip_size_kb": 0, 00:16:21.145 "state": "online", 00:16:21.145 "raid_level": "raid1", 00:16:21.145 "superblock": false, 00:16:21.145 "num_base_bdevs": 2, 00:16:21.145 "num_base_bdevs_discovered": 2, 00:16:21.145 "num_base_bdevs_operational": 2, 00:16:21.145 "base_bdevs_list": [ 00:16:21.145 { 00:16:21.145 "name": "BaseBdev1", 00:16:21.145 "uuid": "d9893c57-10f5-559a-8d0a-d8f7ac63e169", 00:16:21.145 "is_configured": true, 00:16:21.145 "data_offset": 0, 00:16:21.145 "data_size": 65536 00:16:21.145 }, 00:16:21.145 { 00:16:21.145 "name": "BaseBdev2", 00:16:21.145 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:21.145 "is_configured": true, 00:16:21.145 "data_offset": 0, 00:16:21.145 "data_size": 65536 00:16:21.145 } 00:16:21.145 ] 00:16:21.145 }' 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.145 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.714 [2024-11-20 07:13:03.806927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.714 [2024-11-20 07:13:03.902502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.714 07:13:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.714 "name": "raid_bdev1", 00:16:21.714 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:21.714 "strip_size_kb": 0, 00:16:21.714 "state": "online", 00:16:21.714 "raid_level": "raid1", 00:16:21.714 "superblock": false, 00:16:21.714 "num_base_bdevs": 2, 00:16:21.714 "num_base_bdevs_discovered": 1, 00:16:21.714 "num_base_bdevs_operational": 1, 00:16:21.714 "base_bdevs_list": [ 00:16:21.714 { 00:16:21.714 "name": null, 00:16:21.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.714 "is_configured": false, 00:16:21.714 "data_offset": 0, 00:16:21.714 "data_size": 65536 00:16:21.714 }, 00:16:21.714 { 00:16:21.714 "name": "BaseBdev2", 00:16:21.714 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:21.714 "is_configured": true, 00:16:21.714 "data_offset": 0, 00:16:21.714 "data_size": 65536 00:16:21.714 } 00:16:21.714 ] 00:16:21.714 }' 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:21.714 07:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.973 [2024-11-20 07:13:04.019143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:21.973 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:21.973 Zero copy mechanism will not be used. 00:16:21.973 Running I/O for 60 seconds... 00:16:22.281 07:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.281 07:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.281 07:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.281 [2024-11-20 07:13:04.363318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.281 07:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.281 07:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:22.281 [2024-11-20 07:13:04.428128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:22.281 [2024-11-20 07:13:04.430276] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.540 [2024-11-20 07:13:04.555786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:22.540 [2024-11-20 07:13:04.774237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:22.540 [2024-11-20 07:13:04.774735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:23.057 203.00 IOPS, 609.00 MiB/s [2024-11-20T07:13:05.322Z] [2024-11-20 07:13:05.111185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:16:23.057 [2024-11-20 07:13:05.111925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:23.316 [2024-11-20 07:13:05.351451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.316 "name": "raid_bdev1", 00:16:23.316 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:23.316 "strip_size_kb": 0, 00:16:23.316 "state": "online", 00:16:23.316 "raid_level": "raid1", 00:16:23.316 "superblock": false, 00:16:23.316 "num_base_bdevs": 2, 00:16:23.316 "num_base_bdevs_discovered": 2, 00:16:23.316 "num_base_bdevs_operational": 2, 00:16:23.316 "process": { 00:16:23.316 "type": "rebuild", 00:16:23.316 "target": 
"spare", 00:16:23.316 "progress": { 00:16:23.316 "blocks": 10240, 00:16:23.316 "percent": 15 00:16:23.316 } 00:16:23.316 }, 00:16:23.316 "base_bdevs_list": [ 00:16:23.316 { 00:16:23.316 "name": "spare", 00:16:23.316 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:23.316 "is_configured": true, 00:16:23.316 "data_offset": 0, 00:16:23.316 "data_size": 65536 00:16:23.316 }, 00:16:23.316 { 00:16:23.316 "name": "BaseBdev2", 00:16:23.316 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:23.316 "is_configured": true, 00:16:23.316 "data_offset": 0, 00:16:23.316 "data_size": 65536 00:16:23.316 } 00:16:23.316 ] 00:16:23.316 }' 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.316 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.316 [2024-11-20 07:13:05.577996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.575 [2024-11-20 07:13:05.598161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:23.575 [2024-11-20 07:13:05.606976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.575 [2024-11-20 07:13:05.616502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.575 [2024-11-20 07:13:05.616609] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.575 [2024-11-20 07:13:05.616644] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.575 [2024-11-20 07:13:05.676030] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.575 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.575 "name": "raid_bdev1", 00:16:23.576 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:23.576 "strip_size_kb": 0, 00:16:23.576 "state": "online", 00:16:23.576 "raid_level": "raid1", 00:16:23.576 "superblock": false, 00:16:23.576 "num_base_bdevs": 2, 00:16:23.576 "num_base_bdevs_discovered": 1, 00:16:23.576 "num_base_bdevs_operational": 1, 00:16:23.576 "base_bdevs_list": [ 00:16:23.576 { 00:16:23.576 "name": null, 00:16:23.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.576 "is_configured": false, 00:16:23.576 "data_offset": 0, 00:16:23.576 "data_size": 65536 00:16:23.576 }, 00:16:23.576 { 00:16:23.576 "name": "BaseBdev2", 00:16:23.576 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:23.576 "is_configured": true, 00:16:23.576 "data_offset": 0, 00:16:23.576 "data_size": 65536 00:16:23.576 } 00:16:23.576 ] 00:16:23.576 }' 00:16:23.576 07:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.576 07:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 178.50 IOPS, 535.50 MiB/s [2024-11-20T07:13:06.358Z] 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.093 "name": "raid_bdev1", 00:16:24.093 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:24.093 "strip_size_kb": 0, 00:16:24.093 "state": "online", 00:16:24.093 "raid_level": "raid1", 00:16:24.093 "superblock": false, 00:16:24.093 "num_base_bdevs": 2, 00:16:24.093 "num_base_bdevs_discovered": 1, 00:16:24.093 "num_base_bdevs_operational": 1, 00:16:24.093 "base_bdevs_list": [ 00:16:24.093 { 00:16:24.093 "name": null, 00:16:24.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.093 "is_configured": false, 00:16:24.093 "data_offset": 0, 00:16:24.093 "data_size": 65536 00:16:24.093 }, 00:16:24.093 { 00:16:24.093 "name": "BaseBdev2", 00:16:24.093 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:24.093 "is_configured": true, 00:16:24.093 "data_offset": 0, 00:16:24.093 "data_size": 65536 00:16:24.093 } 00:16:24.093 ] 00:16:24.093 }' 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.093 07:13:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 [2024-11-20 07:13:06.277921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.093 07:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:24.351 [2024-11-20 07:13:06.367592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:24.351 [2024-11-20 07:13:06.369855] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.609 [2024-11-20 07:13:06.618763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:24.609 [2024-11-20 07:13:06.619173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:24.867 [2024-11-20 07:13:06.948365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:24.867 [2024-11-20 07:13:06.949039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:25.126 163.33 IOPS, 490.00 MiB/s [2024-11-20T07:13:07.391Z] [2024-11-20 07:13:07.167065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:25.126 [2024-11-20 07:13:07.167546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.126 "name": "raid_bdev1", 00:16:25.126 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:25.126 "strip_size_kb": 0, 00:16:25.126 "state": "online", 00:16:25.126 "raid_level": "raid1", 00:16:25.126 "superblock": false, 00:16:25.126 "num_base_bdevs": 2, 00:16:25.126 "num_base_bdevs_discovered": 2, 00:16:25.126 "num_base_bdevs_operational": 2, 00:16:25.126 "process": { 00:16:25.126 "type": "rebuild", 00:16:25.126 "target": "spare", 00:16:25.126 "progress": { 00:16:25.126 "blocks": 10240, 00:16:25.126 "percent": 15 00:16:25.126 } 00:16:25.126 }, 00:16:25.126 "base_bdevs_list": [ 00:16:25.126 { 00:16:25.126 "name": "spare", 00:16:25.126 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:25.126 "is_configured": true, 00:16:25.126 "data_offset": 0, 00:16:25.126 "data_size": 65536 00:16:25.126 }, 00:16:25.126 { 00:16:25.126 "name": "BaseBdev2", 00:16:25.126 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:25.126 "is_configured": true, 00:16:25.126 "data_offset": 0, 
00:16:25.126 "data_size": 65536 00:16:25.126 } 00:16:25.126 ] 00:16:25.126 }' 00:16:25.126 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=423 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.385 07:13:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.385 [2024-11-20 07:13:07.515466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.385 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.385 "name": "raid_bdev1", 00:16:25.385 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:25.385 "strip_size_kb": 0, 00:16:25.385 "state": "online", 00:16:25.385 "raid_level": "raid1", 00:16:25.385 "superblock": false, 00:16:25.385 "num_base_bdevs": 2, 00:16:25.385 "num_base_bdevs_discovered": 2, 00:16:25.385 "num_base_bdevs_operational": 2, 00:16:25.385 "process": { 00:16:25.385 "type": "rebuild", 00:16:25.385 "target": "spare", 00:16:25.385 "progress": { 00:16:25.385 "blocks": 12288, 00:16:25.385 "percent": 18 00:16:25.385 } 00:16:25.385 }, 00:16:25.385 "base_bdevs_list": [ 00:16:25.385 { 00:16:25.385 "name": "spare", 00:16:25.385 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:25.385 "is_configured": true, 00:16:25.385 "data_offset": 0, 00:16:25.385 "data_size": 65536 00:16:25.385 }, 00:16:25.385 { 00:16:25.385 "name": "BaseBdev2", 00:16:25.385 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:25.385 "is_configured": true, 00:16:25.385 "data_offset": 0, 00:16:25.385 "data_size": 65536 00:16:25.385 } 00:16:25.385 ] 00:16:25.385 }' 00:16:25.386 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.386 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.386 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.386 07:13:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.386 [2024-11-20 07:13:07.640612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:25.386 07:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.386 [2024-11-20 07:13:07.641081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:25.966 136.25 IOPS, 408.75 MiB/s [2024-11-20T07:13:08.231Z] [2024-11-20 07:13:08.126083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:26.532 [2024-11-20 07:13:08.511278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.532 "name": "raid_bdev1", 00:16:26.532 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:26.532 "strip_size_kb": 0, 00:16:26.532 "state": "online", 00:16:26.532 "raid_level": "raid1", 00:16:26.532 "superblock": false, 00:16:26.532 "num_base_bdevs": 2, 00:16:26.532 "num_base_bdevs_discovered": 2, 00:16:26.532 "num_base_bdevs_operational": 2, 00:16:26.532 "process": { 00:16:26.532 "type": "rebuild", 00:16:26.532 "target": "spare", 00:16:26.532 "progress": { 00:16:26.532 "blocks": 26624, 00:16:26.532 "percent": 40 00:16:26.532 } 00:16:26.532 }, 00:16:26.532 "base_bdevs_list": [ 00:16:26.532 { 00:16:26.532 "name": "spare", 00:16:26.532 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:26.532 "is_configured": true, 00:16:26.532 "data_offset": 0, 00:16:26.532 "data_size": 65536 00:16:26.532 }, 00:16:26.532 { 00:16:26.532 "name": "BaseBdev2", 00:16:26.532 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:26.532 "is_configured": true, 00:16:26.532 "data_offset": 0, 00:16:26.532 "data_size": 65536 00:16:26.532 } 00:16:26.532 ] 00:16:26.532 }' 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.532 [2024-11-20 07:13:08.744348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.532 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.791 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.791 07:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:26.791 [2024-11-20 07:13:08.978857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:27.050 118.60 IOPS, 355.80 MiB/s [2024-11-20T07:13:09.315Z] [2024-11-20 07:13:09.092123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:27.616 [2024-11-20 07:13:09.781437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.616 "name": "raid_bdev1", 00:16:27.616 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:27.616 "strip_size_kb": 0, 
00:16:27.616 "state": "online", 00:16:27.616 "raid_level": "raid1", 00:16:27.616 "superblock": false, 00:16:27.616 "num_base_bdevs": 2, 00:16:27.616 "num_base_bdevs_discovered": 2, 00:16:27.616 "num_base_bdevs_operational": 2, 00:16:27.616 "process": { 00:16:27.616 "type": "rebuild", 00:16:27.616 "target": "spare", 00:16:27.616 "progress": { 00:16:27.616 "blocks": 47104, 00:16:27.616 "percent": 71 00:16:27.616 } 00:16:27.616 }, 00:16:27.616 "base_bdevs_list": [ 00:16:27.616 { 00:16:27.616 "name": "spare", 00:16:27.616 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:27.616 "is_configured": true, 00:16:27.616 "data_offset": 0, 00:16:27.616 "data_size": 65536 00:16:27.616 }, 00:16:27.616 { 00:16:27.616 "name": "BaseBdev2", 00:16:27.616 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:27.616 "is_configured": true, 00:16:27.616 "data_offset": 0, 00:16:27.616 "data_size": 65536 00:16:27.616 } 00:16:27.616 ] 00:16:27.616 }' 00:16:27.616 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.875 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.875 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.875 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.875 07:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.808 107.33 IOPS, 322.00 MiB/s [2024-11-20T07:13:11.073Z] [2024-11-20 07:13:10.907293] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.808 07:13:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.808 [2024-11-20 07:13:11.014209] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:28.808 [2024-11-20 07:13:11.016841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.808 07:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.809 "name": "raid_bdev1", 00:16:28.809 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:28.809 "strip_size_kb": 0, 00:16:28.809 "state": "online", 00:16:28.809 "raid_level": "raid1", 00:16:28.809 "superblock": false, 00:16:28.809 "num_base_bdevs": 2, 00:16:28.809 "num_base_bdevs_discovered": 2, 00:16:28.809 "num_base_bdevs_operational": 2, 00:16:28.809 "process": { 00:16:28.809 "type": "rebuild", 00:16:28.809 "target": "spare", 00:16:28.809 "progress": { 00:16:28.809 "blocks": 65536, 00:16:28.809 "percent": 100 00:16:28.809 } 00:16:28.809 }, 00:16:28.809 "base_bdevs_list": [ 00:16:28.809 { 00:16:28.809 "name": "spare", 00:16:28.809 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:28.809 "is_configured": true, 
00:16:28.809 "data_offset": 0, 00:16:28.809 "data_size": 65536 00:16:28.809 }, 00:16:28.809 { 00:16:28.809 "name": "BaseBdev2", 00:16:28.809 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:28.809 "is_configured": true, 00:16:28.809 "data_offset": 0, 00:16:28.809 "data_size": 65536 00:16:28.809 } 00:16:28.809 ] 00:16:28.809 }' 00:16:28.809 07:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.809 07:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.809 07:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.066 96.00 IOPS, 288.00 MiB/s [2024-11-20T07:13:11.331Z] 07:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.066 07:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.000 89.38 IOPS, 268.12 MiB/s [2024-11-20T07:13:12.265Z] 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.000 "name": "raid_bdev1", 00:16:30.000 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:30.000 "strip_size_kb": 0, 00:16:30.000 "state": "online", 00:16:30.000 "raid_level": "raid1", 00:16:30.000 "superblock": false, 00:16:30.000 "num_base_bdevs": 2, 00:16:30.000 "num_base_bdevs_discovered": 2, 00:16:30.000 "num_base_bdevs_operational": 2, 00:16:30.000 "base_bdevs_list": [ 00:16:30.000 { 00:16:30.000 "name": "spare", 00:16:30.000 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:30.000 "is_configured": true, 00:16:30.000 "data_offset": 0, 00:16:30.000 "data_size": 65536 00:16:30.000 }, 00:16:30.000 { 00:16:30.000 "name": "BaseBdev2", 00:16:30.000 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:30.000 "is_configured": true, 00:16:30.000 "data_offset": 0, 00:16:30.000 "data_size": 65536 00:16:30.000 } 00:16:30.000 ] 00:16:30.000 }' 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.000 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:30.001 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.001 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:30.001 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:30.001 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.001 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.001 07:13:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.001 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.001 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.259 "name": "raid_bdev1", 00:16:30.259 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:30.259 "strip_size_kb": 0, 00:16:30.259 "state": "online", 00:16:30.259 "raid_level": "raid1", 00:16:30.259 "superblock": false, 00:16:30.259 "num_base_bdevs": 2, 00:16:30.259 "num_base_bdevs_discovered": 2, 00:16:30.259 "num_base_bdevs_operational": 2, 00:16:30.259 "base_bdevs_list": [ 00:16:30.259 { 00:16:30.259 "name": "spare", 00:16:30.259 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:30.259 "is_configured": true, 00:16:30.259 "data_offset": 0, 00:16:30.259 "data_size": 65536 00:16:30.259 }, 00:16:30.259 { 00:16:30.259 "name": "BaseBdev2", 00:16:30.259 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:30.259 "is_configured": true, 00:16:30.259 "data_offset": 0, 00:16:30.259 "data_size": 65536 00:16:30.259 } 00:16:30.259 ] 00:16:30.259 }' 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ none == \n\o\n\e ]] 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:30.259 "name": "raid_bdev1", 00:16:30.259 "uuid": "3f07707a-2f12-42c3-b10f-9b7afad60d39", 00:16:30.259 "strip_size_kb": 0, 00:16:30.259 "state": "online", 00:16:30.259 "raid_level": "raid1", 00:16:30.259 "superblock": false, 00:16:30.259 "num_base_bdevs": 2, 00:16:30.259 "num_base_bdevs_discovered": 2, 00:16:30.259 "num_base_bdevs_operational": 2, 00:16:30.259 "base_bdevs_list": [ 00:16:30.259 { 00:16:30.259 "name": "spare", 00:16:30.259 "uuid": "4e9a81d2-cf42-5dd9-817a-a8e1e770716d", 00:16:30.259 "is_configured": true, 00:16:30.259 "data_offset": 0, 00:16:30.259 "data_size": 65536 00:16:30.259 }, 00:16:30.259 { 00:16:30.259 "name": "BaseBdev2", 00:16:30.259 "uuid": "6dbd83a7-eff6-5ce1-a581-fa2e0ff60d86", 00:16:30.259 "is_configured": true, 00:16:30.259 "data_offset": 0, 00:16:30.259 "data_size": 65536 00:16:30.259 } 00:16:30.259 ] 00:16:30.259 }' 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.259 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.826 07:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.826 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.826 07:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.826 [2024-11-20 07:13:12.892711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.826 [2024-11-20 07:13:12.892855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.826 00:16:30.826 Latency(us) 00:16:30.826 [2024-11-20T07:13:13.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.826 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:30.826 raid_bdev1 : 8.98 83.00 248.99 0.00 0.00 16799.54 339.84 116304.94 
00:16:30.826 [2024-11-20T07:13:13.091Z] =================================================================================================================== 00:16:30.826 [2024-11-20T07:13:13.091Z] Total : 83.00 248.99 0.00 0.00 16799.54 339.84 116304.94 00:16:30.827 [2024-11-20 07:13:13.008520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.827 [2024-11-20 07:13:13.008683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.827 [2024-11-20 07:13:13.008820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.827 [2024-11-20 07:13:13.008881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:30.827 { 00:16:30.827 "results": [ 00:16:30.827 { 00:16:30.827 "job": "raid_bdev1", 00:16:30.827 "core_mask": "0x1", 00:16:30.827 "workload": "randrw", 00:16:30.827 "percentage": 50, 00:16:30.827 "status": "finished", 00:16:30.827 "queue_depth": 2, 00:16:30.827 "io_size": 3145728, 00:16:30.827 "runtime": 8.976256, 00:16:30.827 "iops": 82.99674162590728, 00:16:30.827 "mibps": 248.99022487772183, 00:16:30.827 "io_failed": 0, 00:16:30.827 "io_timeout": 0, 00:16:30.827 "avg_latency_us": 16799.540946631107, 00:16:30.827 "min_latency_us": 339.8427947598253, 00:16:30.827 "max_latency_us": 116304.93624454149 00:16:30.827 } 00:16:30.827 ], 00:16:30.827 "core_count": 1 00:16:30.827 } 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 
00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:30.827 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:31.393 /dev/nbd0 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:31.393 07:13:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.393 1+0 records in 00:16:31.393 1+0 records out 00:16:31.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402064 s, 10.2 MB/s 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.393 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:31.652 /dev/nbd1 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.652 1+0 records in 00:16:31.652 1+0 records out 00:16:31.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373682 s, 11.0 MB/s 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.652 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:31.909 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:31.909 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.909 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:31.909 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:31.909 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:31.909 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # 
for i in "${nbd_list[@]}" 00:16:31.909 07:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.166 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd0 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76855 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76855 ']' 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76855 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76855 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76855' 00:16:32.425 killing process with pid 76855 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76855 00:16:32.425 Received shutdown signal, test time was about 10.523852 seconds 00:16:32.425 00:16:32.425 Latency(us) 
00:16:32.425 [2024-11-20T07:13:14.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.425 [2024-11-20T07:13:14.690Z] =================================================================================================================== 00:16:32.425 [2024-11-20T07:13:14.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.425 [2024-11-20 07:13:14.525131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.425 07:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76855 00:16:32.683 [2024-11-20 07:13:14.804499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:34.055 00:16:34.055 real 0m14.079s 00:16:34.055 user 0m17.638s 00:16:34.055 sys 0m1.622s 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.055 ************************************ 00:16:34.055 END TEST raid_rebuild_test_io 00:16:34.055 ************************************ 00:16:34.055 07:13:16 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:16:34.055 07:13:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:34.055 07:13:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.055 07:13:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.055 ************************************ 00:16:34.055 START TEST raid_rebuild_test_sb_io 00:16:34.055 ************************************ 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:34.055 07:13:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 
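The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` lines in the trace above are the test harness building its list of base bdev names before creating the array. A minimal standalone sketch of that loop (variable and array names taken from the trace; the direct `+=` append is a simplification of the original command-substitution form, so this is illustrative rather than the exact SPDK script):

```shell
# Build the base bdev name list the way the xtrace above shows:
# iterate i from 1 to num_base_bdevs and emit "BaseBdev$i" for each slot.
num_base_bdevs=2
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2
```

In the real script the echoed names are captured via command substitution into `base_bdevs=('BaseBdev1' 'BaseBdev2')`, which is why each loop iteration appears twice in the trace: once for the counter arithmetic and once for the `echo`.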
00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77258 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77258 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77258 ']' 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.055 07:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.313 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:34.313 Zero copy mechanism will not be used. 00:16:34.313 [2024-11-20 07:13:16.413093] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:16:34.313 [2024-11-20 07:13:16.413285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77258 ] 00:16:34.571 [2024-11-20 07:13:16.595437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.571 [2024-11-20 07:13:16.779624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.829 [2024-11-20 07:13:17.059971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.829 [2024-11-20 07:13:17.060072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.395 BaseBdev1_malloc 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.395 [2024-11-20 07:13:17.422977] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:35.395 [2024-11-20 07:13:17.423094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.395 [2024-11-20 07:13:17.423128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.395 [2024-11-20 07:13:17.423143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.395 [2024-11-20 07:13:17.426163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.395 [2024-11-20 07:13:17.426222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.395 BaseBdev1 00:16:35.395 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.396 BaseBdev2_malloc 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.396 [2024-11-20 07:13:17.491011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:35.396 [2024-11-20 07:13:17.491121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:35.396 [2024-11-20 07:13:17.491149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.396 [2024-11-20 07:13:17.491166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.396 [2024-11-20 07:13:17.494099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.396 [2024-11-20 07:13:17.494155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.396 BaseBdev2 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.396 spare_malloc 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.396 spare_delay 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.396 
[2024-11-20 07:13:17.581683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.396 [2024-11-20 07:13:17.581872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.396 [2024-11-20 07:13:17.581953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:35.396 [2024-11-20 07:13:17.581996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.396 [2024-11-20 07:13:17.588452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.396 [2024-11-20 07:13:17.588565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.396 spare 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.396 [2024-11-20 07:13:17.597033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.396 [2024-11-20 07:13:17.600449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.396 [2024-11-20 07:13:17.600741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.396 [2024-11-20 07:13:17.600778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:35.396 [2024-11-20 07:13:17.601227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:35.396 [2024-11-20 07:13:17.601522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.396 [2024-11-20 
07:13:17.601549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:35.396 [2024-11-20 07:13:17.601884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.396 "name": "raid_bdev1", 00:16:35.396 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:35.396 "strip_size_kb": 0, 00:16:35.396 "state": "online", 00:16:35.396 "raid_level": "raid1", 00:16:35.396 "superblock": true, 00:16:35.396 "num_base_bdevs": 2, 00:16:35.396 "num_base_bdevs_discovered": 2, 00:16:35.396 "num_base_bdevs_operational": 2, 00:16:35.396 "base_bdevs_list": [ 00:16:35.396 { 00:16:35.396 "name": "BaseBdev1", 00:16:35.396 "uuid": "d4a301a6-8e95-5849-9d3a-8c3f0d9f131c", 00:16:35.396 "is_configured": true, 00:16:35.396 "data_offset": 2048, 00:16:35.396 "data_size": 63488 00:16:35.396 }, 00:16:35.396 { 00:16:35.396 "name": "BaseBdev2", 00:16:35.396 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:35.396 "is_configured": true, 00:16:35.396 "data_offset": 2048, 00:16:35.396 "data_size": 63488 00:16:35.396 } 00:16:35.396 ] 00:16:35.396 }' 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.396 07:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:35.962 [2024-11-20 07:13:18.072505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.962 [2024-11-20 07:13:18.175969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
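The `verify_raid_bdev_state` calls in the trace above re-query `bdev_raid_get_bdevs all`, filter the JSON with jq, and compare the reported fields against expected values. A minimal sketch of that comparison step, using the field values captured in this log (the jq extraction is assumed to have already happened; this standalone function is an illustrative reimplementation, not the actual SPDK helper):

```shell
# Field values as reported by bdev_raid_get_bdevs in the trace above.
state="online"
raid_level="raid1"
strip_size=0
num_base_bdevs_operational=2

# Compare each reported field against the expected value,
# failing on the first mismatch (mirroring the helper's checks).
verify_raid_bdev_state() {
  [ "$state" = "$1" ] || return 1
  [ "$raid_level" = "$2" ] || return 1
  [ "$strip_size" -eq "$3" ] || return 1
  [ "$num_base_bdevs_operational" -eq "$4" ] || return 1
}

verify_raid_bdev_state online raid1 0 2 && echo "state ok"
```

The trace runs this check twice with different expectations: operational count 2 right after `bdev_raid_create`, then 1 after `bdev_raid_remove_base_bdev BaseBdev1`, which is why the second JSON dump shows a null first slot with the all-zero UUID.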
00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.962 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.220 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.220 "name": "raid_bdev1", 00:16:36.220 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:36.220 "strip_size_kb": 0, 00:16:36.220 "state": "online", 00:16:36.220 "raid_level": "raid1", 00:16:36.220 "superblock": true, 00:16:36.220 "num_base_bdevs": 2, 00:16:36.220 "num_base_bdevs_discovered": 1, 00:16:36.220 "num_base_bdevs_operational": 1, 00:16:36.220 "base_bdevs_list": [ 00:16:36.220 { 00:16:36.220 "name": null, 00:16:36.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.220 "is_configured": false, 00:16:36.220 "data_offset": 0, 00:16:36.220 "data_size": 63488 00:16:36.220 }, 00:16:36.220 { 00:16:36.220 "name": "BaseBdev2", 00:16:36.220 "uuid": 
"e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:36.220 "is_configured": true, 00:16:36.220 "data_offset": 2048, 00:16:36.220 "data_size": 63488 00:16:36.220 } 00:16:36.220 ] 00:16:36.220 }' 00:16:36.220 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.220 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.220 [2024-11-20 07:13:18.334094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:36.220 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:36.220 Zero copy mechanism will not be used. 00:16:36.220 Running I/O for 60 seconds... 00:16:36.479 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.479 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.479 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.479 [2024-11-20 07:13:18.635741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.479 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.479 07:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:36.479 [2024-11-20 07:13:18.701447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:36.479 [2024-11-20 07:13:18.703755] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.736 [2024-11-20 07:13:18.820011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:36.736 [2024-11-20 07:13:18.820689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:36.994 [2024-11-20 07:13:19.039204] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:36.994 [2024-11-20 07:13:19.039646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:37.251 149.00 IOPS, 447.00 MiB/s [2024-11-20T07:13:19.516Z] [2024-11-20 07:13:19.412963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:37.251 [2024-11-20 07:13:19.413381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.509 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.509 "name": "raid_bdev1", 00:16:37.509 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:37.509 
"strip_size_kb": 0, 00:16:37.509 "state": "online", 00:16:37.509 "raid_level": "raid1", 00:16:37.509 "superblock": true, 00:16:37.509 "num_base_bdevs": 2, 00:16:37.509 "num_base_bdevs_discovered": 2, 00:16:37.509 "num_base_bdevs_operational": 2, 00:16:37.509 "process": { 00:16:37.509 "type": "rebuild", 00:16:37.509 "target": "spare", 00:16:37.509 "progress": { 00:16:37.509 "blocks": 12288, 00:16:37.509 "percent": 19 00:16:37.509 } 00:16:37.509 }, 00:16:37.509 "base_bdevs_list": [ 00:16:37.509 { 00:16:37.510 "name": "spare", 00:16:37.510 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:37.510 "is_configured": true, 00:16:37.510 "data_offset": 2048, 00:16:37.510 "data_size": 63488 00:16:37.510 }, 00:16:37.510 { 00:16:37.510 "name": "BaseBdev2", 00:16:37.510 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:37.510 "is_configured": true, 00:16:37.510 "data_offset": 2048, 00:16:37.510 "data_size": 63488 00:16:37.510 } 00:16:37.510 ] 00:16:37.510 }' 00:16:37.510 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.768 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.768 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.768 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.768 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:37.768 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.768 07:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.768 [2024-11-20 07:13:19.863978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.768 [2024-11-20 07:13:19.981317] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:16:37.768 [2024-11-20 07:13:19.984755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.768 [2024-11-20 07:13:19.984827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.768 [2024-11-20 07:13:19.984846] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.768 [2024-11-20 07:13:20.024804] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.027 "name": "raid_bdev1", 00:16:38.027 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:38.027 "strip_size_kb": 0, 00:16:38.027 "state": "online", 00:16:38.027 "raid_level": "raid1", 00:16:38.027 "superblock": true, 00:16:38.027 "num_base_bdevs": 2, 00:16:38.027 "num_base_bdevs_discovered": 1, 00:16:38.027 "num_base_bdevs_operational": 1, 00:16:38.027 "base_bdevs_list": [ 00:16:38.027 { 00:16:38.027 "name": null, 00:16:38.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.027 "is_configured": false, 00:16:38.027 "data_offset": 0, 00:16:38.027 "data_size": 63488 00:16:38.027 }, 00:16:38.027 { 00:16:38.027 "name": "BaseBdev2", 00:16:38.027 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:38.027 "is_configured": true, 00:16:38.027 "data_offset": 2048, 00:16:38.027 "data_size": 63488 00:16:38.027 } 00:16:38.027 ] 00:16:38.027 }' 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.027 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.286 135.00 IOPS, 405.00 MiB/s [2024-11-20T07:13:20.551Z] 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.286 "name": "raid_bdev1", 00:16:38.286 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:38.286 "strip_size_kb": 0, 00:16:38.286 "state": "online", 00:16:38.286 "raid_level": "raid1", 00:16:38.286 "superblock": true, 00:16:38.286 "num_base_bdevs": 2, 00:16:38.286 "num_base_bdevs_discovered": 1, 00:16:38.286 "num_base_bdevs_operational": 1, 00:16:38.286 "base_bdevs_list": [ 00:16:38.286 { 00:16:38.286 "name": null, 00:16:38.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.286 "is_configured": false, 00:16:38.286 "data_offset": 0, 00:16:38.286 "data_size": 63488 00:16:38.286 }, 00:16:38.286 { 00:16:38.286 "name": "BaseBdev2", 00:16:38.286 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:38.286 "is_configured": true, 00:16:38.286 "data_offset": 2048, 00:16:38.286 "data_size": 63488 00:16:38.286 } 00:16:38.286 ] 00:16:38.286 }' 00:16:38.286 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.543 [2024-11-20 07:13:20.635071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.543 07:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:38.543 [2024-11-20 07:13:20.691206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:38.543 [2024-11-20 07:13:20.693508] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.543 [2024-11-20 07:13:20.804267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:38.543 [2024-11-20 07:13:20.804946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:38.801 [2024-11-20 07:13:21.023819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:38.801 [2024-11-20 07:13:21.024195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:39.059 [2024-11-20 07:13:21.271045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:39.059 [2024-11-20 07:13:21.271745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 
offset_begin: 6144 offset_end: 12288 00:16:39.317 150.67 IOPS, 452.00 MiB/s [2024-11-20T07:13:21.582Z] [2024-11-20 07:13:21.481604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:39.317 [2024-11-20 07:13:21.481994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.575 "name": "raid_bdev1", 00:16:39.575 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:39.575 "strip_size_kb": 0, 00:16:39.575 "state": "online", 00:16:39.575 "raid_level": "raid1", 00:16:39.575 "superblock": true, 00:16:39.575 "num_base_bdevs": 2, 00:16:39.575 "num_base_bdevs_discovered": 2, 00:16:39.575 
"num_base_bdevs_operational": 2, 00:16:39.575 "process": { 00:16:39.575 "type": "rebuild", 00:16:39.575 "target": "spare", 00:16:39.575 "progress": { 00:16:39.575 "blocks": 12288, 00:16:39.575 "percent": 19 00:16:39.575 } 00:16:39.575 }, 00:16:39.575 "base_bdevs_list": [ 00:16:39.575 { 00:16:39.575 "name": "spare", 00:16:39.575 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:39.575 "is_configured": true, 00:16:39.575 "data_offset": 2048, 00:16:39.575 "data_size": 63488 00:16:39.575 }, 00:16:39.575 { 00:16:39.575 "name": "BaseBdev2", 00:16:39.575 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:39.575 "is_configured": true, 00:16:39.575 "data_offset": 2048, 00:16:39.575 "data_size": 63488 00:16:39.575 } 00:16:39.575 ] 00:16:39.575 }' 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:39.575 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=437 00:16:39.575 07:13:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.575 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.576 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.576 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.576 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.576 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.576 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.576 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.834 "name": "raid_bdev1", 00:16:39.834 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:39.834 "strip_size_kb": 0, 00:16:39.834 "state": "online", 00:16:39.834 "raid_level": "raid1", 00:16:39.834 "superblock": true, 00:16:39.834 "num_base_bdevs": 2, 00:16:39.834 "num_base_bdevs_discovered": 2, 00:16:39.834 "num_base_bdevs_operational": 2, 00:16:39.834 "process": { 00:16:39.834 "type": "rebuild", 00:16:39.834 "target": "spare", 00:16:39.834 "progress": { 00:16:39.834 "blocks": 14336, 00:16:39.834 "percent": 22 00:16:39.834 } 00:16:39.834 }, 00:16:39.834 "base_bdevs_list": [ 00:16:39.834 { 00:16:39.834 "name": "spare", 00:16:39.834 "uuid": 
"edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:39.834 "is_configured": true, 00:16:39.834 "data_offset": 2048, 00:16:39.834 "data_size": 63488 00:16:39.834 }, 00:16:39.834 { 00:16:39.834 "name": "BaseBdev2", 00:16:39.834 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:39.834 "is_configured": true, 00:16:39.834 "data_offset": 2048, 00:16:39.834 "data_size": 63488 00:16:39.834 } 00:16:39.834 ] 00:16:39.834 }' 00:16:39.834 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.834 [2024-11-20 07:13:21.866511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:39.834 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.834 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.834 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.834 07:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.093 [2024-11-20 07:13:22.224171] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:40.350 136.00 IOPS, 408.00 MiB/s [2024-11-20T07:13:22.615Z] [2024-11-20 07:13:22.456865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:40.608 [2024-11-20 07:13:22.810264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:40.608 [2024-11-20 07:13:22.810937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.865 07:13:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.865 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.866 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.866 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.866 "name": "raid_bdev1", 00:16:40.866 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:40.866 "strip_size_kb": 0, 00:16:40.866 "state": "online", 00:16:40.866 "raid_level": "raid1", 00:16:40.866 "superblock": true, 00:16:40.866 "num_base_bdevs": 2, 00:16:40.866 "num_base_bdevs_discovered": 2, 00:16:40.866 "num_base_bdevs_operational": 2, 00:16:40.866 "process": { 00:16:40.866 "type": "rebuild", 00:16:40.866 "target": "spare", 00:16:40.866 "progress": { 00:16:40.866 "blocks": 32768, 00:16:40.866 "percent": 51 00:16:40.866 } 00:16:40.866 }, 00:16:40.866 "base_bdevs_list": [ 00:16:40.866 { 00:16:40.866 "name": "spare", 00:16:40.866 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:40.866 "is_configured": true, 00:16:40.866 "data_offset": 2048, 
00:16:40.866 "data_size": 63488 00:16:40.866 }, 00:16:40.866 { 00:16:40.866 "name": "BaseBdev2", 00:16:40.866 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:40.866 "is_configured": true, 00:16:40.866 "data_offset": 2048, 00:16:40.866 "data_size": 63488 00:16:40.866 } 00:16:40.866 ] 00:16:40.866 }' 00:16:40.866 07:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.866 [2024-11-20 07:13:23.013708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:40.866 07:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.866 07:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.866 07:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.866 07:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.123 [2024-11-20 07:13:23.224691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:41.381 118.80 IOPS, 356.40 MiB/s [2024-11-20T07:13:23.646Z] [2024-11-20 07:13:23.428065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:41.638 [2024-11-20 07:13:23.797063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:41.895 [2024-11-20 07:13:24.034848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:41.895 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.895 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.895 
07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.895 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.895 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.895 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.895 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.895 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.896 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.896 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.896 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.896 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.896 "name": "raid_bdev1", 00:16:41.896 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:41.896 "strip_size_kb": 0, 00:16:41.896 "state": "online", 00:16:41.896 "raid_level": "raid1", 00:16:41.896 "superblock": true, 00:16:41.896 "num_base_bdevs": 2, 00:16:41.896 "num_base_bdevs_discovered": 2, 00:16:41.896 "num_base_bdevs_operational": 2, 00:16:41.896 "process": { 00:16:41.896 "type": "rebuild", 00:16:41.896 "target": "spare", 00:16:41.896 "progress": { 00:16:41.896 "blocks": 47104, 00:16:41.896 "percent": 74 00:16:41.896 } 00:16:41.896 }, 00:16:41.896 "base_bdevs_list": [ 00:16:41.896 { 00:16:41.896 "name": "spare", 00:16:41.896 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:41.896 "is_configured": true, 00:16:41.896 "data_offset": 2048, 00:16:41.896 "data_size": 63488 00:16:41.896 }, 00:16:41.896 { 00:16:41.896 "name": "BaseBdev2", 00:16:41.896 "uuid": 
"e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:41.896 "is_configured": true, 00:16:41.896 "data_offset": 2048, 00:16:41.896 "data_size": 63488 00:16:41.896 } 00:16:41.896 ] 00:16:41.896 }' 00:16:41.896 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.153 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.153 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.153 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.153 07:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.153 106.33 IOPS, 319.00 MiB/s [2024-11-20T07:13:24.418Z] [2024-11-20 07:13:24.355742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:42.153 [2024-11-20 07:13:24.356513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:43.103 [2024-11-20 07:13:25.127737] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.103 07:13:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.103 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.104 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.104 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.104 [2024-11-20 07:13:25.231667] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:43.104 [2024-11-20 07:13:25.234419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.104 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.104 "name": "raid_bdev1", 00:16:43.104 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:43.104 "strip_size_kb": 0, 00:16:43.104 "state": "online", 00:16:43.104 "raid_level": "raid1", 00:16:43.104 "superblock": true, 00:16:43.104 "num_base_bdevs": 2, 00:16:43.104 "num_base_bdevs_discovered": 2, 00:16:43.104 "num_base_bdevs_operational": 2, 00:16:43.104 "process": { 00:16:43.104 "type": "rebuild", 00:16:43.104 "target": "spare", 00:16:43.104 "progress": { 00:16:43.104 "blocks": 63488, 00:16:43.104 "percent": 100 00:16:43.104 } 00:16:43.104 }, 00:16:43.104 "base_bdevs_list": [ 00:16:43.104 { 00:16:43.104 "name": "spare", 00:16:43.104 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:43.104 "is_configured": true, 00:16:43.104 "data_offset": 2048, 00:16:43.104 "data_size": 63488 00:16:43.104 }, 00:16:43.104 { 00:16:43.104 "name": "BaseBdev2", 00:16:43.104 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:43.104 "is_configured": true, 00:16:43.104 "data_offset": 2048, 00:16:43.104 "data_size": 63488 00:16:43.104 } 00:16:43.105 ] 00:16:43.105 }' 00:16:43.105 07:13:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.105 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.105 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.105 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.105 07:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.485 95.71 IOPS, 287.14 MiB/s [2024-11-20T07:13:26.750Z] 88.00 IOPS, 264.00 MiB/s [2024-11-20T07:13:26.750Z] 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.485 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:44.485 "name": "raid_bdev1", 00:16:44.485 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:44.485 "strip_size_kb": 0, 00:16:44.485 "state": "online", 00:16:44.485 "raid_level": "raid1", 00:16:44.485 "superblock": true, 00:16:44.485 "num_base_bdevs": 2, 00:16:44.485 "num_base_bdevs_discovered": 2, 00:16:44.485 "num_base_bdevs_operational": 2, 00:16:44.485 "base_bdevs_list": [ 00:16:44.485 { 00:16:44.486 "name": "spare", 00:16:44.486 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:44.486 "is_configured": true, 00:16:44.486 "data_offset": 2048, 00:16:44.486 "data_size": 63488 00:16:44.486 }, 00:16:44.486 { 00:16:44.486 "name": "BaseBdev2", 00:16:44.486 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:44.486 "is_configured": true, 00:16:44.486 "data_offset": 2048, 00:16:44.486 "data_size": 63488 00:16:44.486 } 00:16:44.486 ] 00:16:44.486 }' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.486 "name": "raid_bdev1", 00:16:44.486 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:44.486 "strip_size_kb": 0, 00:16:44.486 "state": "online", 00:16:44.486 "raid_level": "raid1", 00:16:44.486 "superblock": true, 00:16:44.486 "num_base_bdevs": 2, 00:16:44.486 "num_base_bdevs_discovered": 2, 00:16:44.486 "num_base_bdevs_operational": 2, 00:16:44.486 "base_bdevs_list": [ 00:16:44.486 { 00:16:44.486 "name": "spare", 00:16:44.486 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:44.486 "is_configured": true, 00:16:44.486 "data_offset": 2048, 00:16:44.486 "data_size": 63488 00:16:44.486 }, 00:16:44.486 { 00:16:44.486 "name": "BaseBdev2", 00:16:44.486 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:44.486 "is_configured": true, 00:16:44.486 "data_offset": 2048, 00:16:44.486 "data_size": 63488 00:16:44.486 } 00:16:44.486 ] 00:16:44.486 }' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
none == \n\o\n\e ]] 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.486 "name": "raid_bdev1", 00:16:44.486 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:44.486 "strip_size_kb": 0, 00:16:44.486 "state": "online", 00:16:44.486 
"raid_level": "raid1", 00:16:44.486 "superblock": true, 00:16:44.486 "num_base_bdevs": 2, 00:16:44.486 "num_base_bdevs_discovered": 2, 00:16:44.486 "num_base_bdevs_operational": 2, 00:16:44.486 "base_bdevs_list": [ 00:16:44.486 { 00:16:44.486 "name": "spare", 00:16:44.486 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:44.486 "is_configured": true, 00:16:44.486 "data_offset": 2048, 00:16:44.486 "data_size": 63488 00:16:44.486 }, 00:16:44.486 { 00:16:44.486 "name": "BaseBdev2", 00:16:44.486 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:44.486 "is_configured": true, 00:16:44.486 "data_offset": 2048, 00:16:44.486 "data_size": 63488 00:16:44.486 } 00:16:44.486 ] 00:16:44.486 }' 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.486 07:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.052 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 [2024-11-20 07:13:27.104234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.052 [2024-11-20 07:13:27.104289] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.052 00:16:45.052 Latency(us) 00:16:45.052 [2024-11-20T07:13:27.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.052 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:45.052 raid_bdev1 : 8.87 83.29 249.87 0.00 0.00 16131.52 357.73 119052.30 00:16:45.052 [2024-11-20T07:13:27.317Z] =================================================================================================================== 00:16:45.052 
[2024-11-20T07:13:27.317Z] Total : 83.29 249.87 0.00 0.00 16131.52 357.73 119052.30 00:16:45.052 [2024-11-20 07:13:27.220081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.052 [2024-11-20 07:13:27.220148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.052 [2024-11-20 07:13:27.220253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.052 [2024-11-20 07:13:27.220267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:45.052 { 00:16:45.052 "results": [ 00:16:45.052 { 00:16:45.052 "job": "raid_bdev1", 00:16:45.052 "core_mask": "0x1", 00:16:45.052 "workload": "randrw", 00:16:45.052 "percentage": 50, 00:16:45.052 "status": "finished", 00:16:45.052 "queue_depth": 2, 00:16:45.053 "io_size": 3145728, 00:16:45.053 "runtime": 8.872502, 00:16:45.053 "iops": 83.29104913134987, 00:16:45.053 "mibps": 249.8731473940496, 00:16:45.053 "io_failed": 0, 00:16:45.053 "io_timeout": 0, 00:16:45.053 "avg_latency_us": 16131.51769120315, 00:16:45.053 "min_latency_us": 357.7292576419214, 00:16:45.053 "max_latency_us": 119052.29694323144 00:16:45.053 } 00:16:45.053 ], 00:16:45.053 "core_count": 1 00:16:45.053 } 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.053 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:45.310 /dev/nbd0 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:45.568 07:13:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.568 1+0 records in 00:16:45.568 1+0 records out 00:16:45.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414412 s, 9.9 MB/s 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks 
/var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.568 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:45.825 /dev/nbd1 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:45.825 07:13:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.825 1+0 records in 00:16:45.825 1+0 records out 00:16:45.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552409 s, 7.4 MB/s 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.825 07:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:46.082 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:46.082 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.082 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:46.082 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.082 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:16:46.082 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.082 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.340 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.647 07:13:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.647 [2024-11-20 07:13:28.795536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.647 [2024-11-20 07:13:28.795625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.647 [2024-11-20 07:13:28.795655] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:46.647 [2024-11-20 07:13:28.795667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.647 [2024-11-20 07:13:28.798303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.647 [2024-11-20 07:13:28.798377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.647 [2024-11-20 07:13:28.798508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:46.647 [2024-11-20 07:13:28.798586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.647 [2024-11-20 07:13:28.798761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.647 spare 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.647 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.906 [2024-11-20 07:13:28.898706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:46.906 [2024-11-20 07:13:28.898801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:46.906 [2024-11-20 07:13:28.899222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:16:46.906 [2024-11-20 07:13:28.899504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:46.906 [2024-11-20 07:13:28.899527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:46.906 [2024-11-20 07:13:28.899782] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.906 "name": "raid_bdev1", 
00:16:46.906 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:46.906 "strip_size_kb": 0, 00:16:46.906 "state": "online", 00:16:46.906 "raid_level": "raid1", 00:16:46.906 "superblock": true, 00:16:46.906 "num_base_bdevs": 2, 00:16:46.906 "num_base_bdevs_discovered": 2, 00:16:46.906 "num_base_bdevs_operational": 2, 00:16:46.906 "base_bdevs_list": [ 00:16:46.906 { 00:16:46.906 "name": "spare", 00:16:46.906 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:46.906 "is_configured": true, 00:16:46.906 "data_offset": 2048, 00:16:46.906 "data_size": 63488 00:16:46.906 }, 00:16:46.906 { 00:16:46.906 "name": "BaseBdev2", 00:16:46.906 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:46.906 "is_configured": true, 00:16:46.906 "data_offset": 2048, 00:16:46.906 "data_size": 63488 00:16:46.906 } 00:16:46.906 ] 00:16:46.906 }' 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.906 07:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.164 07:13:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.164 "name": "raid_bdev1", 00:16:47.164 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:47.164 "strip_size_kb": 0, 00:16:47.164 "state": "online", 00:16:47.164 "raid_level": "raid1", 00:16:47.164 "superblock": true, 00:16:47.164 "num_base_bdevs": 2, 00:16:47.164 "num_base_bdevs_discovered": 2, 00:16:47.164 "num_base_bdevs_operational": 2, 00:16:47.164 "base_bdevs_list": [ 00:16:47.164 { 00:16:47.164 "name": "spare", 00:16:47.164 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:47.164 "is_configured": true, 00:16:47.164 "data_offset": 2048, 00:16:47.164 "data_size": 63488 00:16:47.164 }, 00:16:47.164 { 00:16:47.164 "name": "BaseBdev2", 00:16:47.164 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:47.164 "is_configured": true, 00:16:47.164 "data_offset": 2048, 00:16:47.164 "data_size": 63488 00:16:47.164 } 00:16:47.164 ] 00:16:47.164 }' 00:16:47.164 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.422 07:13:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.422 [2024-11-20 07:13:29.566773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.422 07:13:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.422 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.422 "name": "raid_bdev1", 00:16:47.422 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:47.422 "strip_size_kb": 0, 00:16:47.422 "state": "online", 00:16:47.422 "raid_level": "raid1", 00:16:47.422 "superblock": true, 00:16:47.422 "num_base_bdevs": 2, 00:16:47.422 "num_base_bdevs_discovered": 1, 00:16:47.422 "num_base_bdevs_operational": 1, 00:16:47.422 "base_bdevs_list": [ 00:16:47.422 { 00:16:47.422 "name": null, 00:16:47.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.423 "is_configured": false, 00:16:47.423 "data_offset": 0, 00:16:47.423 "data_size": 63488 00:16:47.423 }, 00:16:47.423 { 00:16:47.423 "name": "BaseBdev2", 00:16:47.423 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:47.423 "is_configured": true, 00:16:47.423 "data_offset": 2048, 00:16:47.423 "data_size": 63488 00:16:47.423 } 00:16:47.423 ] 00:16:47.423 }' 00:16:47.423 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.423 07:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.988 07:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.988 07:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.988 07:13:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.988 [2024-11-20 07:13:30.062478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.988 [2024-11-20 07:13:30.062739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.988 [2024-11-20 07:13:30.062772] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:47.988 [2024-11-20 07:13:30.062819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.988 [2024-11-20 07:13:30.082967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:47.988 07:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.988 07:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:47.988 [2024-11-20 07:13:30.085216] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.943 "name": "raid_bdev1", 00:16:48.943 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:48.943 "strip_size_kb": 0, 00:16:48.943 "state": "online", 00:16:48.943 "raid_level": "raid1", 00:16:48.943 "superblock": true, 00:16:48.943 "num_base_bdevs": 2, 00:16:48.943 "num_base_bdevs_discovered": 2, 00:16:48.943 "num_base_bdevs_operational": 2, 00:16:48.943 "process": { 00:16:48.943 "type": "rebuild", 00:16:48.943 "target": "spare", 00:16:48.943 "progress": { 00:16:48.943 "blocks": 20480, 00:16:48.943 "percent": 32 00:16:48.943 } 00:16:48.943 }, 00:16:48.943 "base_bdevs_list": [ 00:16:48.943 { 00:16:48.943 "name": "spare", 00:16:48.943 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:48.943 "is_configured": true, 00:16:48.943 "data_offset": 2048, 00:16:48.943 "data_size": 63488 00:16:48.943 }, 00:16:48.943 { 00:16:48.943 "name": "BaseBdev2", 00:16:48.943 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:48.943 "is_configured": true, 00:16:48.943 "data_offset": 2048, 00:16:48.943 "data_size": 63488 00:16:48.943 } 00:16:48.943 ] 00:16:48.943 }' 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.943 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.944 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.200 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.200 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd 
bdev_passthru_delete spare 00:16:49.200 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.200 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.200 [2024-11-20 07:13:31.217219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.200 [2024-11-20 07:13:31.291838] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:49.200 [2024-11-20 07:13:31.291979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.200 [2024-11-20 07:13:31.292000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.201 [2024-11-20 07:13:31.292016] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.201 "name": "raid_bdev1", 00:16:49.201 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:49.201 "strip_size_kb": 0, 00:16:49.201 "state": "online", 00:16:49.201 "raid_level": "raid1", 00:16:49.201 "superblock": true, 00:16:49.201 "num_base_bdevs": 2, 00:16:49.201 "num_base_bdevs_discovered": 1, 00:16:49.201 "num_base_bdevs_operational": 1, 00:16:49.201 "base_bdevs_list": [ 00:16:49.201 { 00:16:49.201 "name": null, 00:16:49.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.201 "is_configured": false, 00:16:49.201 "data_offset": 0, 00:16:49.201 "data_size": 63488 00:16:49.201 }, 00:16:49.201 { 00:16:49.201 "name": "BaseBdev2", 00:16:49.201 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:49.201 "is_configured": true, 00:16:49.201 "data_offset": 2048, 00:16:49.201 "data_size": 63488 00:16:49.201 } 00:16:49.201 ] 00:16:49.201 }' 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.201 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.792 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:16:49.792 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.792 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.792 [2024-11-20 07:13:31.809079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:49.792 [2024-11-20 07:13:31.809195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.792 [2024-11-20 07:13:31.809225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:49.792 [2024-11-20 07:13:31.809239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.792 [2024-11-20 07:13:31.809856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.792 [2024-11-20 07:13:31.809894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:49.792 [2024-11-20 07:13:31.810012] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:49.792 [2024-11-20 07:13:31.810038] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:49.792 [2024-11-20 07:13:31.810050] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:49.792 [2024-11-20 07:13:31.810083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.792 [2024-11-20 07:13:31.830527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:49.792 spare 00:16:49.792 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.792 07:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:49.792 [2024-11-20 07:13:31.832767] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.737 "name": "raid_bdev1", 00:16:50.737 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:50.737 "strip_size_kb": 0, 00:16:50.737 
"state": "online", 00:16:50.737 "raid_level": "raid1", 00:16:50.737 "superblock": true, 00:16:50.737 "num_base_bdevs": 2, 00:16:50.737 "num_base_bdevs_discovered": 2, 00:16:50.737 "num_base_bdevs_operational": 2, 00:16:50.737 "process": { 00:16:50.737 "type": "rebuild", 00:16:50.737 "target": "spare", 00:16:50.737 "progress": { 00:16:50.737 "blocks": 20480, 00:16:50.737 "percent": 32 00:16:50.737 } 00:16:50.737 }, 00:16:50.737 "base_bdevs_list": [ 00:16:50.737 { 00:16:50.737 "name": "spare", 00:16:50.737 "uuid": "edc75d66-de46-5de2-aa19-6daadc7f3c7c", 00:16:50.737 "is_configured": true, 00:16:50.737 "data_offset": 2048, 00:16:50.738 "data_size": 63488 00:16:50.738 }, 00:16:50.738 { 00:16:50.738 "name": "BaseBdev2", 00:16:50.738 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:50.738 "is_configured": true, 00:16:50.738 "data_offset": 2048, 00:16:50.738 "data_size": 63488 00:16:50.738 } 00:16:50.738 ] 00:16:50.738 }' 00:16:50.738 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.738 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.738 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.738 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.738 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:50.738 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 07:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 [2024-11-20 07:13:32.980447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.995 [2024-11-20 07:13:33.039469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:50.995 [2024-11-20 07:13:33.039567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.995 [2024-11-20 07:13:33.039593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.995 [2024-11-20 07:13:33.039602] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.995 07:13:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.995 "name": "raid_bdev1", 00:16:50.995 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:50.995 "strip_size_kb": 0, 00:16:50.995 "state": "online", 00:16:50.995 "raid_level": "raid1", 00:16:50.995 "superblock": true, 00:16:50.995 "num_base_bdevs": 2, 00:16:50.995 "num_base_bdevs_discovered": 1, 00:16:50.995 "num_base_bdevs_operational": 1, 00:16:50.995 "base_bdevs_list": [ 00:16:50.995 { 00:16:50.995 "name": null, 00:16:50.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.995 "is_configured": false, 00:16:50.995 "data_offset": 0, 00:16:50.995 "data_size": 63488 00:16:50.995 }, 00:16:50.995 { 00:16:50.995 "name": "BaseBdev2", 00:16:50.995 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:50.995 "is_configured": true, 00:16:50.995 "data_offset": 2048, 00:16:50.995 "data_size": 63488 00:16:50.995 } 00:16:50.995 ] 00:16:50.995 }' 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.995 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.561 "name": "raid_bdev1", 00:16:51.561 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:51.561 "strip_size_kb": 0, 00:16:51.561 "state": "online", 00:16:51.561 "raid_level": "raid1", 00:16:51.561 "superblock": true, 00:16:51.561 "num_base_bdevs": 2, 00:16:51.561 "num_base_bdevs_discovered": 1, 00:16:51.561 "num_base_bdevs_operational": 1, 00:16:51.561 "base_bdevs_list": [ 00:16:51.561 { 00:16:51.561 "name": null, 00:16:51.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.561 "is_configured": false, 00:16:51.561 "data_offset": 0, 00:16:51.561 "data_size": 63488 00:16:51.561 }, 00:16:51.561 { 00:16:51.561 "name": "BaseBdev2", 00:16:51.561 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:51.561 "is_configured": true, 00:16:51.561 "data_offset": 2048, 00:16:51.561 "data_size": 63488 00:16:51.561 } 00:16:51.561 ] 00:16:51.561 }' 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.561 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.561 [2024-11-20 07:13:33.720521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:51.561 [2024-11-20 07:13:33.720612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.561 [2024-11-20 07:13:33.720640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:51.561 [2024-11-20 07:13:33.720651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.562 [2024-11-20 07:13:33.721214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.562 [2024-11-20 07:13:33.721251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:51.562 [2024-11-20 07:13:33.721368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:51.562 [2024-11-20 07:13:33.721388] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:51.562 [2024-11-20 07:13:33.721402] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:51.562 [2024-11-20 07:13:33.721416] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:51.562 BaseBdev1 00:16:51.562 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.562 07:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.497 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.757 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.757 "name": "raid_bdev1", 00:16:52.757 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:52.757 "strip_size_kb": 0, 00:16:52.757 "state": "online", 00:16:52.757 "raid_level": "raid1", 00:16:52.757 "superblock": true, 00:16:52.757 "num_base_bdevs": 2, 00:16:52.757 "num_base_bdevs_discovered": 1, 00:16:52.757 "num_base_bdevs_operational": 1, 00:16:52.757 "base_bdevs_list": [ 00:16:52.757 { 00:16:52.757 "name": null, 00:16:52.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.757 "is_configured": false, 00:16:52.757 "data_offset": 0, 00:16:52.757 "data_size": 63488 00:16:52.757 }, 00:16:52.757 { 00:16:52.757 "name": "BaseBdev2", 00:16:52.757 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:52.757 "is_configured": true, 00:16:52.757 "data_offset": 2048, 00:16:52.757 "data_size": 63488 00:16:52.757 } 00:16:52.757 ] 00:16:52.757 }' 00:16:52.757 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.757 07:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.049 "name": "raid_bdev1", 00:16:53.049 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:53.049 "strip_size_kb": 0, 00:16:53.049 "state": "online", 00:16:53.049 "raid_level": "raid1", 00:16:53.049 "superblock": true, 00:16:53.049 "num_base_bdevs": 2, 00:16:53.049 "num_base_bdevs_discovered": 1, 00:16:53.049 "num_base_bdevs_operational": 1, 00:16:53.049 "base_bdevs_list": [ 00:16:53.049 { 00:16:53.049 "name": null, 00:16:53.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.049 "is_configured": false, 00:16:53.049 "data_offset": 0, 00:16:53.049 "data_size": 63488 00:16:53.049 }, 00:16:53.049 { 00:16:53.049 "name": "BaseBdev2", 00:16:53.049 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:53.049 "is_configured": true, 00:16:53.049 "data_offset": 2048, 00:16:53.049 "data_size": 63488 00:16:53.049 } 00:16:53.049 ] 00:16:53.049 }' 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.049 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.309 [2024-11-20 07:13:35.354674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.309 [2024-11-20 07:13:35.354885] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.309 [2024-11-20 07:13:35.354904] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:53.309 request: 00:16:53.309 { 00:16:53.309 "base_bdev": "BaseBdev1", 00:16:53.309 "raid_bdev": "raid_bdev1", 00:16:53.309 "method": "bdev_raid_add_base_bdev", 00:16:53.309 "req_id": 1 00:16:53.309 } 00:16:53.309 Got JSON-RPC error response 00:16:53.309 response: 00:16:53.309 { 00:16:53.309 "code": -22, 00:16:53.309 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:53.309 } 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.309 07:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.248 "name": "raid_bdev1", 00:16:54.248 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:54.248 "strip_size_kb": 0, 00:16:54.248 "state": "online", 00:16:54.248 "raid_level": "raid1", 00:16:54.248 "superblock": true, 00:16:54.248 "num_base_bdevs": 2, 00:16:54.248 "num_base_bdevs_discovered": 1, 00:16:54.248 "num_base_bdevs_operational": 1, 00:16:54.248 "base_bdevs_list": [ 00:16:54.248 { 00:16:54.248 "name": null, 00:16:54.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.248 "is_configured": false, 00:16:54.248 "data_offset": 0, 00:16:54.248 "data_size": 63488 00:16:54.248 }, 00:16:54.248 { 00:16:54.248 "name": "BaseBdev2", 00:16:54.248 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:54.248 "is_configured": true, 00:16:54.248 "data_offset": 2048, 00:16:54.248 "data_size": 63488 00:16:54.248 } 00:16:54.248 ] 00:16:54.248 }' 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.248 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.817 07:13:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.817 "name": "raid_bdev1", 00:16:54.817 "uuid": "3d06ebeb-bdcc-4baa-8756-94cd92d5e368", 00:16:54.817 "strip_size_kb": 0, 00:16:54.817 "state": "online", 00:16:54.817 "raid_level": "raid1", 00:16:54.817 "superblock": true, 00:16:54.817 "num_base_bdevs": 2, 00:16:54.817 "num_base_bdevs_discovered": 1, 00:16:54.817 "num_base_bdevs_operational": 1, 00:16:54.817 "base_bdevs_list": [ 00:16:54.817 { 00:16:54.817 "name": null, 00:16:54.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.817 "is_configured": false, 00:16:54.817 "data_offset": 0, 00:16:54.817 "data_size": 63488 00:16:54.817 }, 00:16:54.817 { 00:16:54.817 "name": "BaseBdev2", 00:16:54.817 "uuid": "e54681c5-bb52-5908-848b-6cc8a14b3044", 00:16:54.817 "is_configured": true, 00:16:54.817 "data_offset": 2048, 00:16:54.817 "data_size": 63488 00:16:54.817 } 00:16:54.817 ] 00:16:54.817 }' 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.817 07:13:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77258 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77258 ']' 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77258 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77258 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.817 killing process with pid 77258 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77258' 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77258 00:16:54.817 Received shutdown signal, test time was about 18.690524 seconds 00:16:54.817 00:16:54.817 Latency(us) 00:16:54.817 [2024-11-20T07:13:37.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.817 [2024-11-20T07:13:37.082Z] =================================================================================================================== 00:16:54.817 [2024-11-20T07:13:37.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.817 [2024-11-20 07:13:36.991113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.817 07:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77258 00:16:54.818 [2024-11-20 07:13:36.991295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.818 [2024-11-20 07:13:36.991383] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.818 [2024-11-20 07:13:36.991399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:55.077 [2024-11-20 07:13:37.271990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:56.458 00:16:56.458 real 0m22.291s 00:16:56.458 user 0m28.860s 00:16:56.458 sys 0m2.602s 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.458 ************************************ 00:16:56.458 END TEST raid_rebuild_test_sb_io 00:16:56.458 ************************************ 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.458 07:13:38 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:56.458 07:13:38 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:56.458 07:13:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:56.458 07:13:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.458 07:13:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.458 ************************************ 00:16:56.458 START TEST raid_rebuild_test 00:16:56.458 ************************************ 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:56.458 07:13:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77979 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77979 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77979 ']' 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.458 07:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.718 [2024-11-20 07:13:38.759234] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:16:56.718 [2024-11-20 07:13:38.759386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77979 ] 00:16:56.718 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:56.718 Zero copy mechanism will not be used. 00:16:56.718 [2024-11-20 07:13:38.919729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.978 [2024-11-20 07:13:39.038700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.237 [2024-11-20 07:13:39.244888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.237 [2024-11-20 07:13:39.244965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.497 BaseBdev1_malloc 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:57.497 [2024-11-20 07:13:39.684658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:57.497 [2024-11-20 07:13:39.684737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.497 [2024-11-20 07:13:39.684764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.497 [2024-11-20 07:13:39.684775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.497 [2024-11-20 07:13:39.686890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.497 [2024-11-20 07:13:39.686931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:57.497 BaseBdev1 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.497 BaseBdev2_malloc 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.497 [2024-11-20 07:13:39.733144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:57.497 [2024-11-20 07:13:39.733221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:57.497 [2024-11-20 07:13:39.733243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.497 [2024-11-20 07:13:39.733255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.497 [2024-11-20 07:13:39.735316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.497 [2024-11-20 07:13:39.735369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:57.497 BaseBdev2 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.497 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:57.498 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.498 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 BaseBdev3_malloc 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 [2024-11-20 07:13:39.794265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:57.758 [2024-11-20 07:13:39.794353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.758 [2024-11-20 07:13:39.794378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.758 [2024-11-20 07:13:39.794391] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.758 [2024-11-20 07:13:39.796530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.758 [2024-11-20 07:13:39.796575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:57.758 BaseBdev3 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 BaseBdev4_malloc 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 [2024-11-20 07:13:39.844272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:57.758 [2024-11-20 07:13:39.844351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.758 [2024-11-20 07:13:39.844373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:57.758 [2024-11-20 07:13:39.844384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.758 [2024-11-20 07:13:39.846528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.758 [2024-11-20 07:13:39.846579] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:57.758 BaseBdev4 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 spare_malloc 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 spare_delay 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 [2024-11-20 07:13:39.903982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:57.758 [2024-11-20 07:13:39.904057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.758 [2024-11-20 07:13:39.904092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:57.758 [2024-11-20 07:13:39.904103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.758 [2024-11-20 
07:13:39.906368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.758 [2024-11-20 07:13:39.906407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:57.758 spare 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.758 [2024-11-20 07:13:39.912011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.758 [2024-11-20 07:13:39.913947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.758 [2024-11-20 07:13:39.914027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.758 [2024-11-20 07:13:39.914089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.758 [2024-11-20 07:13:39.914190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.758 [2024-11-20 07:13:39.914210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:57.758 [2024-11-20 07:13:39.914525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:57.758 [2024-11-20 07:13:39.914749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.758 [2024-11-20 07:13:39.914770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.758 [2024-11-20 07:13:39.914946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:57.758 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.759 "name": "raid_bdev1", 00:16:57.759 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:16:57.759 "strip_size_kb": 0, 00:16:57.759 "state": "online", 00:16:57.759 "raid_level": 
"raid1", 00:16:57.759 "superblock": false, 00:16:57.759 "num_base_bdevs": 4, 00:16:57.759 "num_base_bdevs_discovered": 4, 00:16:57.759 "num_base_bdevs_operational": 4, 00:16:57.759 "base_bdevs_list": [ 00:16:57.759 { 00:16:57.759 "name": "BaseBdev1", 00:16:57.759 "uuid": "0886a3d8-05e3-5ba9-9bb7-1f5e43ee1799", 00:16:57.759 "is_configured": true, 00:16:57.759 "data_offset": 0, 00:16:57.759 "data_size": 65536 00:16:57.759 }, 00:16:57.759 { 00:16:57.759 "name": "BaseBdev2", 00:16:57.759 "uuid": "f8ea3f5d-d495-5677-a956-807e6e815ddf", 00:16:57.759 "is_configured": true, 00:16:57.759 "data_offset": 0, 00:16:57.759 "data_size": 65536 00:16:57.759 }, 00:16:57.759 { 00:16:57.759 "name": "BaseBdev3", 00:16:57.759 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:16:57.759 "is_configured": true, 00:16:57.759 "data_offset": 0, 00:16:57.759 "data_size": 65536 00:16:57.759 }, 00:16:57.759 { 00:16:57.759 "name": "BaseBdev4", 00:16:57.759 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:16:57.759 "is_configured": true, 00:16:57.759 "data_offset": 0, 00:16:57.759 "data_size": 65536 00:16:57.759 } 00:16:57.759 ] 00:16:57.759 }' 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.759 07:13:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.327 [2024-11-20 07:13:40.319776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.327 07:13:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.327 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.328 07:13:40 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:58.587 [2024-11-20 07:13:40.634920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:58.587 /dev/nbd0 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.587 1+0 records in 00:16:58.587 1+0 records out 00:16:58.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294214 s, 13.9 MB/s 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:58.587 07:13:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:05.188 65536+0 records in 00:17:05.188 65536+0 records out 00:17:05.188 33554432 bytes (34 MB, 32 MiB) copied, 6.6269 s, 5.1 MB/s 00:17:05.188 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:05.188 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.188 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.188 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.188 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:05.188 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.188 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.447 [2024-11-20 07:13:47.593277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.447 
07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.447 [2024-11-20 07:13:47.634600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.447 07:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.448 07:13:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.448 "name": "raid_bdev1", 00:17:05.448 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:05.448 "strip_size_kb": 0, 00:17:05.448 "state": "online", 00:17:05.448 "raid_level": "raid1", 00:17:05.448 "superblock": false, 00:17:05.448 "num_base_bdevs": 4, 00:17:05.448 "num_base_bdevs_discovered": 3, 00:17:05.448 "num_base_bdevs_operational": 3, 00:17:05.448 "base_bdevs_list": [ 00:17:05.448 { 00:17:05.448 "name": null, 00:17:05.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.448 "is_configured": false, 00:17:05.448 "data_offset": 0, 00:17:05.448 "data_size": 65536 00:17:05.448 }, 00:17:05.448 { 00:17:05.448 "name": "BaseBdev2", 00:17:05.448 "uuid": "f8ea3f5d-d495-5677-a956-807e6e815ddf", 00:17:05.448 "is_configured": true, 00:17:05.448 "data_offset": 0, 00:17:05.448 "data_size": 65536 00:17:05.448 }, 00:17:05.448 { 00:17:05.448 "name": "BaseBdev3", 00:17:05.448 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:05.448 "is_configured": true, 00:17:05.448 "data_offset": 0, 00:17:05.448 "data_size": 65536 00:17:05.448 }, 00:17:05.448 { 00:17:05.448 "name": "BaseBdev4", 00:17:05.448 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:05.448 
"is_configured": true, 00:17:05.448 "data_offset": 0, 00:17:05.448 "data_size": 65536 00:17:05.448 } 00:17:05.448 ] 00:17:05.448 }' 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.448 07:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.014 07:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.014 07:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.014 07:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.014 [2024-11-20 07:13:48.113790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.014 [2024-11-20 07:13:48.130651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:17:06.014 07:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.014 07:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.014 [2024-11-20 07:13:48.132716] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.958 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.958 "name": "raid_bdev1", 00:17:06.958 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:06.958 "strip_size_kb": 0, 00:17:06.958 "state": "online", 00:17:06.958 "raid_level": "raid1", 00:17:06.958 "superblock": false, 00:17:06.958 "num_base_bdevs": 4, 00:17:06.958 "num_base_bdevs_discovered": 4, 00:17:06.958 "num_base_bdevs_operational": 4, 00:17:06.958 "process": { 00:17:06.958 "type": "rebuild", 00:17:06.958 "target": "spare", 00:17:06.958 "progress": { 00:17:06.958 "blocks": 20480, 00:17:06.958 "percent": 31 00:17:06.958 } 00:17:06.958 }, 00:17:06.958 "base_bdevs_list": [ 00:17:06.958 { 00:17:06.958 "name": "spare", 00:17:06.958 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:06.958 "is_configured": true, 00:17:06.958 "data_offset": 0, 00:17:06.958 "data_size": 65536 00:17:06.958 }, 00:17:06.958 { 00:17:06.958 "name": "BaseBdev2", 00:17:06.958 "uuid": "f8ea3f5d-d495-5677-a956-807e6e815ddf", 00:17:06.958 "is_configured": true, 00:17:06.958 "data_offset": 0, 00:17:06.958 "data_size": 65536 00:17:06.958 }, 00:17:06.958 { 00:17:06.958 "name": "BaseBdev3", 00:17:06.958 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:06.958 "is_configured": true, 00:17:06.958 "data_offset": 0, 00:17:06.958 "data_size": 65536 00:17:06.958 }, 00:17:06.958 { 00:17:06.958 "name": "BaseBdev4", 00:17:06.958 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:06.958 "is_configured": true, 00:17:06.958 "data_offset": 0, 00:17:06.958 "data_size": 65536 00:17:06.959 } 00:17:06.959 ] 00:17:06.959 }' 00:17:06.959 07:13:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.230 [2024-11-20 07:13:49.279849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.230 [2024-11-20 07:13:49.338952] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:07.230 [2024-11-20 07:13:49.339061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.230 [2024-11-20 07:13:49.339078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.230 [2024-11-20 07:13:49.339088] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.230 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.231 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.231 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.231 "name": "raid_bdev1", 00:17:07.231 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:07.231 "strip_size_kb": 0, 00:17:07.231 "state": "online", 00:17:07.231 "raid_level": "raid1", 00:17:07.231 "superblock": false, 00:17:07.231 "num_base_bdevs": 4, 00:17:07.231 "num_base_bdevs_discovered": 3, 00:17:07.231 "num_base_bdevs_operational": 3, 00:17:07.231 "base_bdevs_list": [ 00:17:07.231 { 00:17:07.231 "name": null, 00:17:07.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.231 "is_configured": false, 00:17:07.231 "data_offset": 0, 00:17:07.231 "data_size": 65536 00:17:07.231 }, 00:17:07.231 { 00:17:07.231 "name": "BaseBdev2", 00:17:07.231 "uuid": "f8ea3f5d-d495-5677-a956-807e6e815ddf", 00:17:07.231 "is_configured": true, 00:17:07.231 "data_offset": 0, 00:17:07.231 "data_size": 65536 00:17:07.231 }, 00:17:07.231 { 
00:17:07.231 "name": "BaseBdev3", 00:17:07.231 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:07.231 "is_configured": true, 00:17:07.231 "data_offset": 0, 00:17:07.231 "data_size": 65536 00:17:07.231 }, 00:17:07.231 { 00:17:07.231 "name": "BaseBdev4", 00:17:07.231 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:07.231 "is_configured": true, 00:17:07.231 "data_offset": 0, 00:17:07.231 "data_size": 65536 00:17:07.231 } 00:17:07.231 ] 00:17:07.231 }' 00:17:07.231 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.231 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.801 "name": "raid_bdev1", 00:17:07.801 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:07.801 "strip_size_kb": 0, 00:17:07.801 "state": "online", 
00:17:07.801 "raid_level": "raid1", 00:17:07.801 "superblock": false, 00:17:07.801 "num_base_bdevs": 4, 00:17:07.801 "num_base_bdevs_discovered": 3, 00:17:07.801 "num_base_bdevs_operational": 3, 00:17:07.801 "base_bdevs_list": [ 00:17:07.801 { 00:17:07.801 "name": null, 00:17:07.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.801 "is_configured": false, 00:17:07.801 "data_offset": 0, 00:17:07.801 "data_size": 65536 00:17:07.801 }, 00:17:07.801 { 00:17:07.801 "name": "BaseBdev2", 00:17:07.801 "uuid": "f8ea3f5d-d495-5677-a956-807e6e815ddf", 00:17:07.801 "is_configured": true, 00:17:07.801 "data_offset": 0, 00:17:07.801 "data_size": 65536 00:17:07.801 }, 00:17:07.801 { 00:17:07.801 "name": "BaseBdev3", 00:17:07.801 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:07.801 "is_configured": true, 00:17:07.801 "data_offset": 0, 00:17:07.801 "data_size": 65536 00:17:07.801 }, 00:17:07.801 { 00:17:07.801 "name": "BaseBdev4", 00:17:07.801 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:07.801 "is_configured": true, 00:17:07.801 "data_offset": 0, 00:17:07.801 "data_size": 65536 00:17:07.801 } 00:17:07.801 ] 00:17:07.801 }' 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.801 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 [2024-11-20 07:13:49.907834] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.802 [2024-11-20 07:13:49.923218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:17:07.802 07:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.802 07:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:07.802 [2024-11-20 07:13:49.925180] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.741 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.741 "name": "raid_bdev1", 00:17:08.741 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:08.741 "strip_size_kb": 0, 00:17:08.741 "state": "online", 00:17:08.741 "raid_level": "raid1", 00:17:08.741 "superblock": false, 00:17:08.741 "num_base_bdevs": 4, 00:17:08.741 
"num_base_bdevs_discovered": 4, 00:17:08.741 "num_base_bdevs_operational": 4, 00:17:08.741 "process": { 00:17:08.741 "type": "rebuild", 00:17:08.741 "target": "spare", 00:17:08.741 "progress": { 00:17:08.741 "blocks": 20480, 00:17:08.741 "percent": 31 00:17:08.741 } 00:17:08.741 }, 00:17:08.741 "base_bdevs_list": [ 00:17:08.741 { 00:17:08.741 "name": "spare", 00:17:08.742 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:08.742 "is_configured": true, 00:17:08.742 "data_offset": 0, 00:17:08.742 "data_size": 65536 00:17:08.742 }, 00:17:08.742 { 00:17:08.742 "name": "BaseBdev2", 00:17:08.742 "uuid": "f8ea3f5d-d495-5677-a956-807e6e815ddf", 00:17:08.742 "is_configured": true, 00:17:08.742 "data_offset": 0, 00:17:08.742 "data_size": 65536 00:17:08.742 }, 00:17:08.742 { 00:17:08.742 "name": "BaseBdev3", 00:17:08.742 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:08.742 "is_configured": true, 00:17:08.742 "data_offset": 0, 00:17:08.742 "data_size": 65536 00:17:08.742 }, 00:17:08.742 { 00:17:08.742 "name": "BaseBdev4", 00:17:08.742 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:08.742 "is_configured": true, 00:17:08.742 "data_offset": 0, 00:17:08.742 "data_size": 65536 00:17:08.742 } 00:17:08.742 ] 00:17:08.742 }' 00:17:08.742 07:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.002 [2024-11-20 07:13:51.096638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.002 [2024-11-20 07:13:51.131045] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.002 07:13:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.002 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.002 "name": "raid_bdev1", 00:17:09.002 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:09.003 "strip_size_kb": 0, 00:17:09.003 "state": "online", 00:17:09.003 "raid_level": "raid1", 00:17:09.003 "superblock": false, 00:17:09.003 "num_base_bdevs": 4, 00:17:09.003 "num_base_bdevs_discovered": 3, 00:17:09.003 "num_base_bdevs_operational": 3, 00:17:09.003 "process": { 00:17:09.003 "type": "rebuild", 00:17:09.003 "target": "spare", 00:17:09.003 "progress": { 00:17:09.003 "blocks": 24576, 00:17:09.003 "percent": 37 00:17:09.003 } 00:17:09.003 }, 00:17:09.003 "base_bdevs_list": [ 00:17:09.003 { 00:17:09.003 "name": "spare", 00:17:09.003 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:09.003 "is_configured": true, 00:17:09.003 "data_offset": 0, 00:17:09.003 "data_size": 65536 00:17:09.003 }, 00:17:09.003 { 00:17:09.003 "name": null, 00:17:09.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.003 "is_configured": false, 00:17:09.003 "data_offset": 0, 00:17:09.003 "data_size": 65536 00:17:09.003 }, 00:17:09.003 { 00:17:09.003 "name": "BaseBdev3", 00:17:09.003 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:09.003 "is_configured": true, 00:17:09.003 "data_offset": 0, 00:17:09.003 "data_size": 65536 00:17:09.003 }, 00:17:09.003 { 00:17:09.003 "name": "BaseBdev4", 00:17:09.003 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:09.003 "is_configured": true, 00:17:09.003 "data_offset": 0, 00:17:09.003 "data_size": 65536 00:17:09.003 } 00:17:09.003 ] 00:17:09.003 }' 00:17:09.003 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.003 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.003 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=467 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.263 "name": "raid_bdev1", 00:17:09.263 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:09.263 "strip_size_kb": 0, 00:17:09.263 "state": "online", 00:17:09.263 "raid_level": "raid1", 00:17:09.263 "superblock": false, 00:17:09.263 "num_base_bdevs": 4, 00:17:09.263 "num_base_bdevs_discovered": 3, 00:17:09.263 "num_base_bdevs_operational": 3, 00:17:09.263 "process": { 00:17:09.263 "type": "rebuild", 00:17:09.263 "target": "spare", 00:17:09.263 "progress": { 
00:17:09.263 "blocks": 26624, 00:17:09.263 "percent": 40 00:17:09.263 } 00:17:09.263 }, 00:17:09.263 "base_bdevs_list": [ 00:17:09.263 { 00:17:09.263 "name": "spare", 00:17:09.263 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:09.263 "is_configured": true, 00:17:09.263 "data_offset": 0, 00:17:09.263 "data_size": 65536 00:17:09.263 }, 00:17:09.263 { 00:17:09.263 "name": null, 00:17:09.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.263 "is_configured": false, 00:17:09.263 "data_offset": 0, 00:17:09.263 "data_size": 65536 00:17:09.263 }, 00:17:09.263 { 00:17:09.263 "name": "BaseBdev3", 00:17:09.263 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:09.263 "is_configured": true, 00:17:09.263 "data_offset": 0, 00:17:09.263 "data_size": 65536 00:17:09.263 }, 00:17:09.263 { 00:17:09.263 "name": "BaseBdev4", 00:17:09.263 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:09.263 "is_configured": true, 00:17:09.263 "data_offset": 0, 00:17:09.263 "data_size": 65536 00:17:09.263 } 00:17:09.263 ] 00:17:09.263 }' 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.263 07:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.202 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.202 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.202 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.202 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:17:10.202 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.202 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.203 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.203 07:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.203 07:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.203 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.463 07:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.463 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.463 "name": "raid_bdev1", 00:17:10.463 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:10.463 "strip_size_kb": 0, 00:17:10.463 "state": "online", 00:17:10.463 "raid_level": "raid1", 00:17:10.463 "superblock": false, 00:17:10.463 "num_base_bdevs": 4, 00:17:10.463 "num_base_bdevs_discovered": 3, 00:17:10.463 "num_base_bdevs_operational": 3, 00:17:10.463 "process": { 00:17:10.463 "type": "rebuild", 00:17:10.463 "target": "spare", 00:17:10.463 "progress": { 00:17:10.463 "blocks": 51200, 00:17:10.463 "percent": 78 00:17:10.463 } 00:17:10.463 }, 00:17:10.463 "base_bdevs_list": [ 00:17:10.463 { 00:17:10.463 "name": "spare", 00:17:10.463 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:10.463 "is_configured": true, 00:17:10.463 "data_offset": 0, 00:17:10.463 "data_size": 65536 00:17:10.463 }, 00:17:10.463 { 00:17:10.463 "name": null, 00:17:10.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.463 "is_configured": false, 00:17:10.463 "data_offset": 0, 00:17:10.463 "data_size": 65536 00:17:10.463 }, 00:17:10.463 { 00:17:10.463 "name": "BaseBdev3", 00:17:10.463 "uuid": 
"d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:10.463 "is_configured": true, 00:17:10.463 "data_offset": 0, 00:17:10.463 "data_size": 65536 00:17:10.463 }, 00:17:10.463 { 00:17:10.463 "name": "BaseBdev4", 00:17:10.463 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:10.463 "is_configured": true, 00:17:10.463 "data_offset": 0, 00:17:10.463 "data_size": 65536 00:17:10.463 } 00:17:10.463 ] 00:17:10.463 }' 00:17:10.463 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.463 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.463 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.463 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.463 07:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.033 [2024-11-20 07:13:53.140871] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:11.033 [2024-11-20 07:13:53.141094] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:11.033 [2024-11-20 07:13:53.141190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.601 07:13:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.601 "name": "raid_bdev1", 00:17:11.601 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:11.601 "strip_size_kb": 0, 00:17:11.601 "state": "online", 00:17:11.601 "raid_level": "raid1", 00:17:11.601 "superblock": false, 00:17:11.601 "num_base_bdevs": 4, 00:17:11.601 "num_base_bdevs_discovered": 3, 00:17:11.601 "num_base_bdevs_operational": 3, 00:17:11.601 "base_bdevs_list": [ 00:17:11.601 { 00:17:11.601 "name": "spare", 00:17:11.601 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:11.601 "is_configured": true, 00:17:11.601 "data_offset": 0, 00:17:11.601 "data_size": 65536 00:17:11.601 }, 00:17:11.601 { 00:17:11.601 "name": null, 00:17:11.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.601 "is_configured": false, 00:17:11.601 "data_offset": 0, 00:17:11.601 "data_size": 65536 00:17:11.601 }, 00:17:11.601 { 00:17:11.601 "name": "BaseBdev3", 00:17:11.601 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:11.601 "is_configured": true, 00:17:11.601 "data_offset": 0, 00:17:11.601 "data_size": 65536 00:17:11.601 }, 00:17:11.601 { 00:17:11.601 "name": "BaseBdev4", 00:17:11.601 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:11.601 "is_configured": true, 00:17:11.601 "data_offset": 0, 00:17:11.601 "data_size": 65536 00:17:11.601 } 00:17:11.601 ] 00:17:11.601 }' 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.601 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.601 "name": "raid_bdev1", 00:17:11.601 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:11.601 "strip_size_kb": 0, 00:17:11.601 "state": "online", 00:17:11.601 "raid_level": "raid1", 00:17:11.601 "superblock": false, 00:17:11.601 "num_base_bdevs": 4, 00:17:11.601 "num_base_bdevs_discovered": 3, 00:17:11.601 "num_base_bdevs_operational": 3, 00:17:11.601 
"base_bdevs_list": [ 00:17:11.601 { 00:17:11.601 "name": "spare", 00:17:11.601 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:11.601 "is_configured": true, 00:17:11.601 "data_offset": 0, 00:17:11.601 "data_size": 65536 00:17:11.601 }, 00:17:11.601 { 00:17:11.601 "name": null, 00:17:11.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.601 "is_configured": false, 00:17:11.601 "data_offset": 0, 00:17:11.601 "data_size": 65536 00:17:11.601 }, 00:17:11.601 { 00:17:11.601 "name": "BaseBdev3", 00:17:11.602 "uuid": "d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:11.602 "is_configured": true, 00:17:11.602 "data_offset": 0, 00:17:11.602 "data_size": 65536 00:17:11.602 }, 00:17:11.602 { 00:17:11.602 "name": "BaseBdev4", 00:17:11.602 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:11.602 "is_configured": true, 00:17:11.602 "data_offset": 0, 00:17:11.602 "data_size": 65536 00:17:11.602 } 00:17:11.602 ] 00:17:11.602 }' 00:17:11.602 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.602 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.602 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.861 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.862 "name": "raid_bdev1", 00:17:11.862 "uuid": "58f9cdf4-4cbc-4f8c-b24f-cde50d1ce81b", 00:17:11.862 "strip_size_kb": 0, 00:17:11.862 "state": "online", 00:17:11.862 "raid_level": "raid1", 00:17:11.862 "superblock": false, 00:17:11.862 "num_base_bdevs": 4, 00:17:11.862 "num_base_bdevs_discovered": 3, 00:17:11.862 "num_base_bdevs_operational": 3, 00:17:11.862 "base_bdevs_list": [ 00:17:11.862 { 00:17:11.862 "name": "spare", 00:17:11.862 "uuid": "01632498-621c-5bec-9361-af02f846d016", 00:17:11.862 "is_configured": true, 00:17:11.862 "data_offset": 0, 00:17:11.862 "data_size": 65536 00:17:11.862 }, 00:17:11.862 { 00:17:11.862 "name": null, 00:17:11.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.862 "is_configured": false, 00:17:11.862 "data_offset": 0, 00:17:11.862 "data_size": 65536 00:17:11.862 }, 00:17:11.862 { 00:17:11.862 "name": "BaseBdev3", 00:17:11.862 "uuid": 
"d796e437-66a9-5801-9d0e-2feef8ba6486", 00:17:11.862 "is_configured": true, 00:17:11.862 "data_offset": 0, 00:17:11.862 "data_size": 65536 00:17:11.862 }, 00:17:11.862 { 00:17:11.862 "name": "BaseBdev4", 00:17:11.862 "uuid": "03a4b216-8342-5266-afb9-bc110211ab06", 00:17:11.862 "is_configured": true, 00:17:11.862 "data_offset": 0, 00:17:11.862 "data_size": 65536 00:17:11.862 } 00:17:11.862 ] 00:17:11.862 }' 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.862 07:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.120 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.120 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.120 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.120 [2024-11-20 07:13:54.373915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.120 [2024-11-20 07:13:54.374060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.120 [2024-11-20 07:13:54.374180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.120 [2024-11-20 07:13:54.374300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.120 [2024-11-20 07:13:54.374379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:12.120 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.120 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.120 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.120 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.379 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:12.639 /dev/nbd0 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.639 07:13:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.639 1+0 records in 00:17:12.639 1+0 records out 00:17:12.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035067 s, 11.7 MB/s 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.639 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:12.898 /dev/nbd1 00:17:12.899 
07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.899 1+0 records in 00:17:12.899 1+0 records out 00:17:12.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406502 s, 10.1 MB/s 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.899 07:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.158 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.418 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77979 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77979 ']' 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77979 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77979 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77979' 00:17:13.678 killing process with pid 77979 00:17:13.678 
07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77979 00:17:13.678 Received shutdown signal, test time was about 60.000000 seconds 00:17:13.678 00:17:13.678 Latency(us) 00:17:13.678 [2024-11-20T07:13:55.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.678 [2024-11-20T07:13:55.943Z] =================================================================================================================== 00:17:13.678 [2024-11-20T07:13:55.943Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.678 [2024-11-20 07:13:55.732710] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.678 07:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77979 00:17:14.247 [2024-11-20 07:13:56.305746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:15.627 ************************************ 00:17:15.627 END TEST raid_rebuild_test 00:17:15.627 ************************************ 00:17:15.627 00:17:15.627 real 0m18.923s 00:17:15.627 user 0m20.610s 00:17:15.627 sys 0m3.405s 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.627 07:13:57 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:15.627 07:13:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:15.627 07:13:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.627 07:13:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.627 ************************************ 00:17:15.627 START TEST raid_rebuild_test_sb 00:17:15.627 ************************************ 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:15.627 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78431 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78431 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78431 ']' 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.628 07:13:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.628 [2024-11-20 07:13:57.778036] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:15.628 [2024-11-20 07:13:57.778314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78431 ] 00:17:15.628 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:15.628 Zero copy mechanism will not be used. 00:17:15.888 [2024-11-20 07:13:57.960848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.888 [2024-11-20 07:13:58.098073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.147 [2024-11-20 07:13:58.335947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.147 [2024-11-20 07:13:58.336112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.414 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.414 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:16.414 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.414 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:16.414 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:16.414 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.695 BaseBdev1_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.695 [2024-11-20 07:13:58.724995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:16.695 [2024-11-20 07:13:58.725200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.695 [2024-11-20 07:13:58.725257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:16.695 [2024-11-20 07:13:58.725320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.695 [2024-11-20 07:13:58.727894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.695 [2024-11-20 07:13:58.727998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:16.695 BaseBdev1 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.695 BaseBdev2_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.695 [2024-11-20 07:13:58.785966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:16.695 [2024-11-20 07:13:58.786160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.695 [2024-11-20 07:13:58.786217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:16.695 [2024-11-20 07:13:58.786260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.695 [2024-11-20 07:13:58.788798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.695 [2024-11-20 07:13:58.788909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:16.695 BaseBdev2 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.695 BaseBdev3_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.695 [2024-11-20 07:13:58.858147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:16.695 [2024-11-20 07:13:58.858326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.695 [2024-11-20 07:13:58.858391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:16.695 [2024-11-20 07:13:58.858442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.695 [2024-11-20 07:13:58.860811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.695 [2024-11-20 07:13:58.860929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:16.695 BaseBdev3 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.695 BaseBdev4_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:16.695 [2024-11-20 07:13:58.919787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:16.695 [2024-11-20 07:13:58.919961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.695 [2024-11-20 07:13:58.919990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:16.695 [2024-11-20 07:13:58.920004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.695 [2024-11-20 07:13:58.922634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.695 [2024-11-20 07:13:58.922692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:16.695 BaseBdev4 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.695 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.955 spare_malloc 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.955 spare_delay 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.955 07:13:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.955 [2024-11-20 07:13:58.994029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.955 [2024-11-20 07:13:58.994227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.955 [2024-11-20 07:13:58.994273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:16.955 [2024-11-20 07:13:58.994315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.955 [2024-11-20 07:13:58.996796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.955 [2024-11-20 07:13:58.996909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.955 spare 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:16.955 07:13:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.955 [2024-11-20 07:13:59.006088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.955 [2024-11-20 07:13:59.008363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.955 [2024-11-20 07:13:59.008491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:16.955 [2024-11-20 07:13:59.008590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:16.955 [2024-11-20 07:13:59.008839] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:16.955 [2024-11-20 07:13:59.008908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:16.955 [2024-11-20 07:13:59.009278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:16.955 [2024-11-20 07:13:59.009606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:16.955 [2024-11-20 07:13:59.009664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:16.955 [2024-11-20 07:13:59.009976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.955 "name": "raid_bdev1", 00:17:16.955 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:16.955 "strip_size_kb": 0, 00:17:16.955 "state": "online", 00:17:16.955 "raid_level": "raid1", 00:17:16.955 "superblock": true, 00:17:16.955 "num_base_bdevs": 4, 00:17:16.955 "num_base_bdevs_discovered": 4, 00:17:16.955 "num_base_bdevs_operational": 4, 00:17:16.955 "base_bdevs_list": [ 00:17:16.955 { 00:17:16.955 "name": "BaseBdev1", 00:17:16.955 "uuid": "9c74055e-e9ad-5333-93f8-e280180a971f", 00:17:16.955 "is_configured": true, 00:17:16.955 "data_offset": 2048, 00:17:16.955 "data_size": 63488 00:17:16.955 }, 00:17:16.955 { 00:17:16.955 "name": "BaseBdev2", 00:17:16.955 "uuid": "fea33162-91a5-575c-8727-684180ab905e", 00:17:16.955 "is_configured": true, 00:17:16.955 "data_offset": 2048, 00:17:16.955 "data_size": 63488 00:17:16.955 }, 00:17:16.955 { 00:17:16.955 "name": "BaseBdev3", 00:17:16.955 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:16.955 "is_configured": true, 00:17:16.955 "data_offset": 2048, 00:17:16.955 "data_size": 63488 00:17:16.955 }, 00:17:16.955 { 00:17:16.955 "name": "BaseBdev4", 00:17:16.955 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:16.955 "is_configured": true, 00:17:16.955 "data_offset": 2048, 00:17:16.955 "data_size": 63488 00:17:16.955 } 00:17:16.955 ] 00:17:16.955 }' 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.955 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:17.523 [2024-11-20 07:13:59.509616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.523 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:17.783 [2024-11-20 07:13:59.844773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:17.783 /dev/nbd0 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:17.783 
07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.783 1+0 records in 00:17:17.783 1+0 records out 00:17:17.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486901 s, 8.4 MB/s 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:17.783 07:13:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:25.920 63488+0 records in 00:17:25.920 63488+0 records out 00:17:25.920 32505856 bytes (33 MB, 31 MiB) copied, 6.74713 s, 4.8 MB/s 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.920 [2024-11-20 07:14:06.922533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.920 [2024-11-20 07:14:06.966597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.920 
07:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.920 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.921 07:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.921 "name": "raid_bdev1", 00:17:25.921 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:25.921 "strip_size_kb": 0, 00:17:25.921 "state": 
"online", 00:17:25.921 "raid_level": "raid1", 00:17:25.921 "superblock": true, 00:17:25.921 "num_base_bdevs": 4, 00:17:25.921 "num_base_bdevs_discovered": 3, 00:17:25.921 "num_base_bdevs_operational": 3, 00:17:25.921 "base_bdevs_list": [ 00:17:25.921 { 00:17:25.921 "name": null, 00:17:25.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.921 "is_configured": false, 00:17:25.921 "data_offset": 0, 00:17:25.921 "data_size": 63488 00:17:25.921 }, 00:17:25.921 { 00:17:25.921 "name": "BaseBdev2", 00:17:25.921 "uuid": "fea33162-91a5-575c-8727-684180ab905e", 00:17:25.921 "is_configured": true, 00:17:25.921 "data_offset": 2048, 00:17:25.921 "data_size": 63488 00:17:25.921 }, 00:17:25.921 { 00:17:25.921 "name": "BaseBdev3", 00:17:25.921 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:25.921 "is_configured": true, 00:17:25.921 "data_offset": 2048, 00:17:25.921 "data_size": 63488 00:17:25.921 }, 00:17:25.921 { 00:17:25.921 "name": "BaseBdev4", 00:17:25.921 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:25.921 "is_configured": true, 00:17:25.921 "data_offset": 2048, 00:17:25.921 "data_size": 63488 00:17:25.921 } 00:17:25.921 ] 00:17:25.921 }' 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.921 [2024-11-20 07:14:07.449874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.921 [2024-11-20 07:14:07.468133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.921 07:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:25.921 [2024-11-20 07:14:07.470711] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.490 "name": "raid_bdev1", 00:17:26.490 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:26.490 "strip_size_kb": 0, 00:17:26.490 "state": "online", 00:17:26.490 "raid_level": "raid1", 00:17:26.490 "superblock": true, 00:17:26.490 "num_base_bdevs": 4, 00:17:26.490 "num_base_bdevs_discovered": 4, 00:17:26.490 "num_base_bdevs_operational": 4, 00:17:26.490 "process": { 00:17:26.490 "type": "rebuild", 00:17:26.490 "target": "spare", 00:17:26.490 "progress": { 00:17:26.490 "blocks": 20480, 
00:17:26.490 "percent": 32 00:17:26.490 } 00:17:26.490 }, 00:17:26.490 "base_bdevs_list": [ 00:17:26.490 { 00:17:26.490 "name": "spare", 00:17:26.490 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:26.490 "is_configured": true, 00:17:26.490 "data_offset": 2048, 00:17:26.490 "data_size": 63488 00:17:26.490 }, 00:17:26.490 { 00:17:26.490 "name": "BaseBdev2", 00:17:26.490 "uuid": "fea33162-91a5-575c-8727-684180ab905e", 00:17:26.490 "is_configured": true, 00:17:26.490 "data_offset": 2048, 00:17:26.490 "data_size": 63488 00:17:26.490 }, 00:17:26.490 { 00:17:26.490 "name": "BaseBdev3", 00:17:26.490 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:26.490 "is_configured": true, 00:17:26.490 "data_offset": 2048, 00:17:26.490 "data_size": 63488 00:17:26.490 }, 00:17:26.490 { 00:17:26.490 "name": "BaseBdev4", 00:17:26.490 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:26.490 "is_configured": true, 00:17:26.490 "data_offset": 2048, 00:17:26.490 "data_size": 63488 00:17:26.490 } 00:17:26.490 ] 00:17:26.490 }' 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.490 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.490 [2024-11-20 07:14:08.629637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.491 [2024-11-20 07:14:08.677473] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.491 [2024-11-20 07:14:08.677579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.491 [2024-11-20 07:14:08.677600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.491 [2024-11-20 07:14:08.677613] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.491 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.750 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.750 "name": "raid_bdev1", 00:17:26.750 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:26.750 "strip_size_kb": 0, 00:17:26.750 "state": "online", 00:17:26.750 "raid_level": "raid1", 00:17:26.750 "superblock": true, 00:17:26.750 "num_base_bdevs": 4, 00:17:26.750 "num_base_bdevs_discovered": 3, 00:17:26.750 "num_base_bdevs_operational": 3, 00:17:26.750 "base_bdevs_list": [ 00:17:26.750 { 00:17:26.750 "name": null, 00:17:26.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.750 "is_configured": false, 00:17:26.750 "data_offset": 0, 00:17:26.750 "data_size": 63488 00:17:26.750 }, 00:17:26.750 { 00:17:26.750 "name": "BaseBdev2", 00:17:26.750 "uuid": "fea33162-91a5-575c-8727-684180ab905e", 00:17:26.750 "is_configured": true, 00:17:26.750 "data_offset": 2048, 00:17:26.750 "data_size": 63488 00:17:26.750 }, 00:17:26.750 { 00:17:26.750 "name": "BaseBdev3", 00:17:26.750 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:26.750 "is_configured": true, 00:17:26.750 "data_offset": 2048, 00:17:26.750 "data_size": 63488 00:17:26.750 }, 00:17:26.750 { 00:17:26.750 "name": "BaseBdev4", 00:17:26.750 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:26.750 "is_configured": true, 00:17:26.750 "data_offset": 2048, 00:17:26.750 "data_size": 63488 00:17:26.750 } 00:17:26.750 ] 00:17:26.750 }' 00:17:26.750 07:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.750 07:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.009 "name": "raid_bdev1", 00:17:27.009 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:27.009 "strip_size_kb": 0, 00:17:27.009 "state": "online", 00:17:27.009 "raid_level": "raid1", 00:17:27.009 "superblock": true, 00:17:27.009 "num_base_bdevs": 4, 00:17:27.009 "num_base_bdevs_discovered": 3, 00:17:27.009 "num_base_bdevs_operational": 3, 00:17:27.009 "base_bdevs_list": [ 00:17:27.009 { 00:17:27.009 "name": null, 00:17:27.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.009 "is_configured": false, 00:17:27.009 "data_offset": 0, 00:17:27.009 "data_size": 63488 00:17:27.009 }, 00:17:27.009 { 00:17:27.009 "name": "BaseBdev2", 00:17:27.009 "uuid": "fea33162-91a5-575c-8727-684180ab905e", 00:17:27.009 "is_configured": true, 00:17:27.009 "data_offset": 2048, 00:17:27.009 "data_size": 63488 00:17:27.009 }, 00:17:27.009 { 00:17:27.009 "name": "BaseBdev3", 00:17:27.009 "uuid": 
"9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:27.009 "is_configured": true, 00:17:27.009 "data_offset": 2048, 00:17:27.009 "data_size": 63488 00:17:27.009 }, 00:17:27.009 { 00:17:27.009 "name": "BaseBdev4", 00:17:27.009 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:27.009 "is_configured": true, 00:17:27.009 "data_offset": 2048, 00:17:27.009 "data_size": 63488 00:17:27.009 } 00:17:27.009 ] 00:17:27.009 }' 00:17:27.009 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.267 [2024-11-20 07:14:09.315235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.267 [2024-11-20 07:14:09.332282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.267 07:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:27.267 [2024-11-20 07:14:09.334751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.202 "name": "raid_bdev1", 00:17:28.202 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:28.202 "strip_size_kb": 0, 00:17:28.202 "state": "online", 00:17:28.202 "raid_level": "raid1", 00:17:28.202 "superblock": true, 00:17:28.202 "num_base_bdevs": 4, 00:17:28.202 "num_base_bdevs_discovered": 4, 00:17:28.202 "num_base_bdevs_operational": 4, 00:17:28.202 "process": { 00:17:28.202 "type": "rebuild", 00:17:28.202 "target": "spare", 00:17:28.202 "progress": { 00:17:28.202 "blocks": 20480, 00:17:28.202 "percent": 32 00:17:28.202 } 00:17:28.202 }, 00:17:28.202 "base_bdevs_list": [ 00:17:28.202 { 00:17:28.202 "name": "spare", 00:17:28.202 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:28.202 "is_configured": true, 00:17:28.202 "data_offset": 2048, 00:17:28.202 "data_size": 63488 00:17:28.202 }, 00:17:28.202 { 00:17:28.202 "name": "BaseBdev2", 00:17:28.202 "uuid": "fea33162-91a5-575c-8727-684180ab905e", 00:17:28.202 "is_configured": true, 00:17:28.202 "data_offset": 2048, 
00:17:28.202 "data_size": 63488 00:17:28.202 }, 00:17:28.202 { 00:17:28.202 "name": "BaseBdev3", 00:17:28.202 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:28.202 "is_configured": true, 00:17:28.202 "data_offset": 2048, 00:17:28.202 "data_size": 63488 00:17:28.202 }, 00:17:28.202 { 00:17:28.202 "name": "BaseBdev4", 00:17:28.202 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:28.202 "is_configured": true, 00:17:28.202 "data_offset": 2048, 00:17:28.202 "data_size": 63488 00:17:28.202 } 00:17:28.202 ] 00:17:28.202 }' 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.202 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:28.461 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:28.461 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.462 [2024-11-20 07:14:10.517997] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.462 [2024-11-20 07:14:10.641050] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.462 "name": "raid_bdev1", 00:17:28.462 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:28.462 "strip_size_kb": 0, 00:17:28.462 "state": "online", 00:17:28.462 "raid_level": "raid1", 00:17:28.462 "superblock": true, 00:17:28.462 "num_base_bdevs": 4, 
00:17:28.462 "num_base_bdevs_discovered": 3, 00:17:28.462 "num_base_bdevs_operational": 3, 00:17:28.462 "process": { 00:17:28.462 "type": "rebuild", 00:17:28.462 "target": "spare", 00:17:28.462 "progress": { 00:17:28.462 "blocks": 24576, 00:17:28.462 "percent": 38 00:17:28.462 } 00:17:28.462 }, 00:17:28.462 "base_bdevs_list": [ 00:17:28.462 { 00:17:28.462 "name": "spare", 00:17:28.462 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:28.462 "is_configured": true, 00:17:28.462 "data_offset": 2048, 00:17:28.462 "data_size": 63488 00:17:28.462 }, 00:17:28.462 { 00:17:28.462 "name": null, 00:17:28.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.462 "is_configured": false, 00:17:28.462 "data_offset": 0, 00:17:28.462 "data_size": 63488 00:17:28.462 }, 00:17:28.462 { 00:17:28.462 "name": "BaseBdev3", 00:17:28.462 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:28.462 "is_configured": true, 00:17:28.462 "data_offset": 2048, 00:17:28.462 "data_size": 63488 00:17:28.462 }, 00:17:28.462 { 00:17:28.462 "name": "BaseBdev4", 00:17:28.462 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:28.462 "is_configured": true, 00:17:28.462 "data_offset": 2048, 00:17:28.462 "data_size": 63488 00:17:28.462 } 00:17:28.462 ] 00:17:28.462 }' 00:17:28.462 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.722 "name": "raid_bdev1", 00:17:28.722 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:28.722 "strip_size_kb": 0, 00:17:28.722 "state": "online", 00:17:28.722 "raid_level": "raid1", 00:17:28.722 "superblock": true, 00:17:28.722 "num_base_bdevs": 4, 00:17:28.722 "num_base_bdevs_discovered": 3, 00:17:28.722 "num_base_bdevs_operational": 3, 00:17:28.722 "process": { 00:17:28.722 "type": "rebuild", 00:17:28.722 "target": "spare", 00:17:28.722 "progress": { 00:17:28.722 "blocks": 26624, 00:17:28.722 "percent": 41 00:17:28.722 } 00:17:28.722 }, 00:17:28.722 "base_bdevs_list": [ 00:17:28.722 { 00:17:28.722 "name": "spare", 00:17:28.722 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:28.722 "is_configured": true, 00:17:28.722 "data_offset": 2048, 00:17:28.722 "data_size": 63488 00:17:28.722 }, 00:17:28.722 { 
00:17:28.722 "name": null, 00:17:28.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.722 "is_configured": false, 00:17:28.722 "data_offset": 0, 00:17:28.722 "data_size": 63488 00:17:28.722 }, 00:17:28.722 { 00:17:28.722 "name": "BaseBdev3", 00:17:28.722 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:28.722 "is_configured": true, 00:17:28.722 "data_offset": 2048, 00:17:28.722 "data_size": 63488 00:17:28.722 }, 00:17:28.722 { 00:17:28.722 "name": "BaseBdev4", 00:17:28.722 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:28.722 "is_configured": true, 00:17:28.722 "data_offset": 2048, 00:17:28.722 "data_size": 63488 00:17:28.722 } 00:17:28.722 ] 00:17:28.722 }' 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.722 07:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.102 07:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.102 07:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.102 "name": "raid_bdev1", 00:17:30.102 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:30.102 "strip_size_kb": 0, 00:17:30.102 "state": "online", 00:17:30.102 "raid_level": "raid1", 00:17:30.102 "superblock": true, 00:17:30.102 "num_base_bdevs": 4, 00:17:30.102 "num_base_bdevs_discovered": 3, 00:17:30.102 "num_base_bdevs_operational": 3, 00:17:30.102 "process": { 00:17:30.102 "type": "rebuild", 00:17:30.102 "target": "spare", 00:17:30.102 "progress": { 00:17:30.102 "blocks": 51200, 00:17:30.102 "percent": 80 00:17:30.102 } 00:17:30.102 }, 00:17:30.102 "base_bdevs_list": [ 00:17:30.102 { 00:17:30.102 "name": "spare", 00:17:30.102 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:30.102 "is_configured": true, 00:17:30.102 "data_offset": 2048, 00:17:30.102 "data_size": 63488 00:17:30.102 }, 00:17:30.102 { 00:17:30.102 "name": null, 00:17:30.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.102 "is_configured": false, 00:17:30.102 "data_offset": 0, 00:17:30.102 "data_size": 63488 00:17:30.102 }, 00:17:30.102 { 00:17:30.102 "name": "BaseBdev3", 00:17:30.102 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:30.102 "is_configured": true, 00:17:30.102 "data_offset": 2048, 00:17:30.102 "data_size": 63488 00:17:30.102 }, 00:17:30.102 { 00:17:30.102 "name": "BaseBdev4", 00:17:30.103 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:30.103 "is_configured": true, 00:17:30.103 "data_offset": 
2048, 00:17:30.103 "data_size": 63488 00:17:30.103 } 00:17:30.103 ] 00:17:30.103 }' 00:17:30.103 07:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.103 07:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.103 07:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.103 07:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.103 07:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.363 [2024-11-20 07:14:12.551285] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:30.363 [2024-11-20 07:14:12.551525] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:30.363 [2024-11-20 07:14:12.551698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.935 "name": "raid_bdev1", 00:17:30.935 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:30.935 "strip_size_kb": 0, 00:17:30.935 "state": "online", 00:17:30.935 "raid_level": "raid1", 00:17:30.935 "superblock": true, 00:17:30.935 "num_base_bdevs": 4, 00:17:30.935 "num_base_bdevs_discovered": 3, 00:17:30.935 "num_base_bdevs_operational": 3, 00:17:30.935 "base_bdevs_list": [ 00:17:30.935 { 00:17:30.935 "name": "spare", 00:17:30.935 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:30.935 "is_configured": true, 00:17:30.935 "data_offset": 2048, 00:17:30.935 "data_size": 63488 00:17:30.935 }, 00:17:30.935 { 00:17:30.935 "name": null, 00:17:30.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.935 "is_configured": false, 00:17:30.935 "data_offset": 0, 00:17:30.935 "data_size": 63488 00:17:30.935 }, 00:17:30.935 { 00:17:30.935 "name": "BaseBdev3", 00:17:30.935 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:30.935 "is_configured": true, 00:17:30.935 "data_offset": 2048, 00:17:30.935 "data_size": 63488 00:17:30.935 }, 00:17:30.935 { 00:17:30.935 "name": "BaseBdev4", 00:17:30.935 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:30.935 "is_configured": true, 00:17:30.935 "data_offset": 2048, 00:17:30.935 "data_size": 63488 00:17:30.935 } 00:17:30.935 ] 00:17:30.935 }' 00:17:30.935 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.196 "name": "raid_bdev1", 00:17:31.196 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:31.196 "strip_size_kb": 0, 00:17:31.196 "state": "online", 00:17:31.196 "raid_level": "raid1", 00:17:31.196 "superblock": true, 00:17:31.196 "num_base_bdevs": 4, 00:17:31.196 "num_base_bdevs_discovered": 3, 00:17:31.196 "num_base_bdevs_operational": 3, 00:17:31.196 "base_bdevs_list": [ 00:17:31.196 { 00:17:31.196 "name": "spare", 00:17:31.196 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:31.196 "is_configured": true, 00:17:31.196 "data_offset": 2048, 00:17:31.196 "data_size": 63488 
00:17:31.196 }, 00:17:31.196 { 00:17:31.196 "name": null, 00:17:31.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.196 "is_configured": false, 00:17:31.196 "data_offset": 0, 00:17:31.196 "data_size": 63488 00:17:31.196 }, 00:17:31.196 { 00:17:31.196 "name": "BaseBdev3", 00:17:31.196 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:31.196 "is_configured": true, 00:17:31.196 "data_offset": 2048, 00:17:31.196 "data_size": 63488 00:17:31.196 }, 00:17:31.196 { 00:17:31.196 "name": "BaseBdev4", 00:17:31.196 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:31.196 "is_configured": true, 00:17:31.196 "data_offset": 2048, 00:17:31.196 "data_size": 63488 00:17:31.196 } 00:17:31.196 ] 00:17:31.196 }' 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.196 07:14:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.196 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.196 "name": "raid_bdev1", 00:17:31.196 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:31.196 "strip_size_kb": 0, 00:17:31.196 "state": "online", 00:17:31.196 "raid_level": "raid1", 00:17:31.196 "superblock": true, 00:17:31.196 "num_base_bdevs": 4, 00:17:31.196 "num_base_bdevs_discovered": 3, 00:17:31.196 "num_base_bdevs_operational": 3, 00:17:31.196 "base_bdevs_list": [ 00:17:31.196 { 00:17:31.196 "name": "spare", 00:17:31.196 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:31.196 "is_configured": true, 00:17:31.196 "data_offset": 2048, 00:17:31.196 "data_size": 63488 00:17:31.196 }, 00:17:31.196 { 00:17:31.196 "name": null, 00:17:31.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.196 "is_configured": false, 00:17:31.196 "data_offset": 0, 00:17:31.196 "data_size": 63488 00:17:31.196 }, 00:17:31.196 { 00:17:31.196 "name": "BaseBdev3", 00:17:31.196 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:31.196 "is_configured": true, 00:17:31.196 "data_offset": 2048, 00:17:31.196 "data_size": 63488 00:17:31.196 }, 
00:17:31.196 { 00:17:31.196 "name": "BaseBdev4", 00:17:31.196 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:31.196 "is_configured": true, 00:17:31.196 "data_offset": 2048, 00:17:31.196 "data_size": 63488 00:17:31.196 } 00:17:31.196 ] 00:17:31.196 }' 00:17:31.197 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.197 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.765 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.765 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.765 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.765 [2024-11-20 07:14:13.848388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.765 [2024-11-20 07:14:13.848501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.765 [2024-11-20 07:14:13.848647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.765 [2024-11-20 07:14:13.848784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.765 [2024-11-20 07:14:13.848836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:31.765 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.765 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.766 07:14:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.766 07:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.024 /dev/nbd0 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.024 1+0 records in 00:17:32.024 1+0 records out 00:17:32.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578456 s, 7.1 MB/s 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.024 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:32.282 /dev/nbd1 00:17:32.282 07:14:14 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.282 1+0 records in 00:17:32.282 1+0 records out 00:17:32.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302356 s, 13.5 MB/s 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:32.282 07:14:14 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.282 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:32.540 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:32.540 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.540 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.540 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.540 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:32.540 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.540 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.798 07:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.056 [2024-11-20 07:14:15.292373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:17:33.056 [2024-11-20 07:14:15.292460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.056 [2024-11-20 07:14:15.292491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:33.056 [2024-11-20 07:14:15.292502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.056 [2024-11-20 07:14:15.295104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.056 [2024-11-20 07:14:15.295162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:33.056 [2024-11-20 07:14:15.295283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:33.056 [2024-11-20 07:14:15.295373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.056 [2024-11-20 07:14:15.295556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:33.056 [2024-11-20 07:14:15.295664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:33.056 spare 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.056 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.314 [2024-11-20 07:14:15.395587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:33.314 [2024-11-20 07:14:15.395739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:33.314 [2024-11-20 07:14:15.396184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:33.314 [2024-11-20 07:14:15.396510] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:33.314 [2024-11-20 07:14:15.396572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:33.314 [2024-11-20 07:14:15.396857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.314 "name": "raid_bdev1", 00:17:33.314 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:33.314 "strip_size_kb": 0, 00:17:33.314 "state": "online", 00:17:33.314 "raid_level": "raid1", 00:17:33.314 "superblock": true, 00:17:33.314 "num_base_bdevs": 4, 00:17:33.314 "num_base_bdevs_discovered": 3, 00:17:33.314 "num_base_bdevs_operational": 3, 00:17:33.314 "base_bdevs_list": [ 00:17:33.314 { 00:17:33.314 "name": "spare", 00:17:33.314 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:33.314 "is_configured": true, 00:17:33.314 "data_offset": 2048, 00:17:33.314 "data_size": 63488 00:17:33.314 }, 00:17:33.314 { 00:17:33.314 "name": null, 00:17:33.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.314 "is_configured": false, 00:17:33.314 "data_offset": 2048, 00:17:33.314 "data_size": 63488 00:17:33.314 }, 00:17:33.314 { 00:17:33.314 "name": "BaseBdev3", 00:17:33.314 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:33.314 "is_configured": true, 00:17:33.314 "data_offset": 2048, 00:17:33.314 "data_size": 63488 00:17:33.314 }, 00:17:33.314 { 00:17:33.314 "name": "BaseBdev4", 00:17:33.314 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:33.314 "is_configured": true, 00:17:33.314 "data_offset": 2048, 00:17:33.314 "data_size": 63488 00:17:33.314 } 00:17:33.314 ] 00:17:33.314 }' 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.314 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.880 07:14:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.880 "name": "raid_bdev1", 00:17:33.880 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:33.880 "strip_size_kb": 0, 00:17:33.880 "state": "online", 00:17:33.880 "raid_level": "raid1", 00:17:33.880 "superblock": true, 00:17:33.880 "num_base_bdevs": 4, 00:17:33.880 "num_base_bdevs_discovered": 3, 00:17:33.880 "num_base_bdevs_operational": 3, 00:17:33.880 "base_bdevs_list": [ 00:17:33.880 { 00:17:33.880 "name": "spare", 00:17:33.880 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:33.880 "is_configured": true, 00:17:33.880 "data_offset": 2048, 00:17:33.880 "data_size": 63488 00:17:33.880 }, 00:17:33.880 { 00:17:33.880 "name": null, 00:17:33.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.880 "is_configured": false, 00:17:33.880 "data_offset": 2048, 00:17:33.880 "data_size": 63488 00:17:33.880 }, 00:17:33.880 { 00:17:33.880 "name": "BaseBdev3", 00:17:33.880 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:33.880 "is_configured": true, 00:17:33.880 "data_offset": 2048, 00:17:33.880 "data_size": 63488 00:17:33.880 
}, 00:17:33.880 { 00:17:33.880 "name": "BaseBdev4", 00:17:33.880 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:33.880 "is_configured": true, 00:17:33.880 "data_offset": 2048, 00:17:33.880 "data_size": 63488 00:17:33.880 } 00:17:33.880 ] 00:17:33.880 }' 00:17:33.880 07:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.880 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.880 [2024-11-20 07:14:16.139680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.138 "name": "raid_bdev1", 00:17:34.138 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:34.138 "strip_size_kb": 0, 00:17:34.138 "state": "online", 00:17:34.138 "raid_level": "raid1", 00:17:34.138 "superblock": true, 00:17:34.138 "num_base_bdevs": 4, 00:17:34.138 "num_base_bdevs_discovered": 2, 00:17:34.138 "num_base_bdevs_operational": 
2, 00:17:34.138 "base_bdevs_list": [ 00:17:34.138 { 00:17:34.138 "name": null, 00:17:34.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.138 "is_configured": false, 00:17:34.138 "data_offset": 0, 00:17:34.138 "data_size": 63488 00:17:34.138 }, 00:17:34.138 { 00:17:34.138 "name": null, 00:17:34.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.138 "is_configured": false, 00:17:34.138 "data_offset": 2048, 00:17:34.138 "data_size": 63488 00:17:34.138 }, 00:17:34.138 { 00:17:34.138 "name": "BaseBdev3", 00:17:34.138 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:34.138 "is_configured": true, 00:17:34.138 "data_offset": 2048, 00:17:34.138 "data_size": 63488 00:17:34.138 }, 00:17:34.138 { 00:17:34.138 "name": "BaseBdev4", 00:17:34.138 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:34.138 "is_configured": true, 00:17:34.138 "data_offset": 2048, 00:17:34.138 "data_size": 63488 00:17:34.138 } 00:17:34.138 ] 00:17:34.138 }' 00:17:34.138 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.139 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.396 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.396 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.396 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.396 [2024-11-20 07:14:16.658838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.396 [2024-11-20 07:14:16.659119] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:34.396 [2024-11-20 07:14:16.659188] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:34.396 [2024-11-20 07:14:16.659274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.653 [2024-11-20 07:14:16.676802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:17:34.653 07:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.653 07:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:34.654 [2024-11-20 07:14:16.679182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.587 "name": "raid_bdev1", 00:17:35.587 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:35.587 "strip_size_kb": 0, 00:17:35.587 "state": "online", 00:17:35.587 "raid_level": "raid1", 
00:17:35.587 "superblock": true, 00:17:35.587 "num_base_bdevs": 4, 00:17:35.587 "num_base_bdevs_discovered": 3, 00:17:35.587 "num_base_bdevs_operational": 3, 00:17:35.587 "process": { 00:17:35.587 "type": "rebuild", 00:17:35.587 "target": "spare", 00:17:35.587 "progress": { 00:17:35.587 "blocks": 20480, 00:17:35.587 "percent": 32 00:17:35.587 } 00:17:35.587 }, 00:17:35.587 "base_bdevs_list": [ 00:17:35.587 { 00:17:35.587 "name": "spare", 00:17:35.587 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:35.587 "is_configured": true, 00:17:35.587 "data_offset": 2048, 00:17:35.587 "data_size": 63488 00:17:35.587 }, 00:17:35.587 { 00:17:35.587 "name": null, 00:17:35.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.587 "is_configured": false, 00:17:35.587 "data_offset": 2048, 00:17:35.587 "data_size": 63488 00:17:35.587 }, 00:17:35.587 { 00:17:35.587 "name": "BaseBdev3", 00:17:35.587 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:35.587 "is_configured": true, 00:17:35.587 "data_offset": 2048, 00:17:35.587 "data_size": 63488 00:17:35.587 }, 00:17:35.587 { 00:17:35.587 "name": "BaseBdev4", 00:17:35.587 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:35.587 "is_configured": true, 00:17:35.587 "data_offset": 2048, 00:17:35.587 "data_size": 63488 00:17:35.587 } 00:17:35.587 ] 00:17:35.587 }' 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:35.587 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:35.588 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.845 [2024-11-20 07:14:17.850596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.845 [2024-11-20 07:14:17.885519] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:35.845 [2024-11-20 07:14:17.885747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.845 [2024-11-20 07:14:17.885777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.845 [2024-11-20 07:14:17.885787] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.845 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.845 "name": "raid_bdev1", 00:17:35.845 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:35.845 "strip_size_kb": 0, 00:17:35.845 "state": "online", 00:17:35.845 "raid_level": "raid1", 00:17:35.845 "superblock": true, 00:17:35.845 "num_base_bdevs": 4, 00:17:35.845 "num_base_bdevs_discovered": 2, 00:17:35.845 "num_base_bdevs_operational": 2, 00:17:35.845 "base_bdevs_list": [ 00:17:35.846 { 00:17:35.846 "name": null, 00:17:35.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.846 "is_configured": false, 00:17:35.846 "data_offset": 0, 00:17:35.846 "data_size": 63488 00:17:35.846 }, 00:17:35.846 { 00:17:35.846 "name": null, 00:17:35.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.846 "is_configured": false, 00:17:35.846 "data_offset": 2048, 00:17:35.846 "data_size": 63488 00:17:35.846 }, 00:17:35.846 { 00:17:35.846 "name": "BaseBdev3", 00:17:35.846 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:35.846 "is_configured": true, 00:17:35.846 "data_offset": 2048, 00:17:35.846 "data_size": 63488 00:17:35.846 }, 00:17:35.846 { 00:17:35.846 "name": "BaseBdev4", 00:17:35.846 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:35.846 "is_configured": true, 00:17:35.846 "data_offset": 2048, 00:17:35.846 "data_size": 63488 00:17:35.846 } 00:17:35.846 ] 00:17:35.846 }' 00:17:35.846 07:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:35.846 07:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.411 07:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:36.411 07:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.411 07:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.411 [2024-11-20 07:14:18.403671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:36.411 [2024-11-20 07:14:18.403828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.411 [2024-11-20 07:14:18.403881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:36.411 [2024-11-20 07:14:18.403952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.411 [2024-11-20 07:14:18.404539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.411 [2024-11-20 07:14:18.404613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:36.411 [2024-11-20 07:14:18.404758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:36.411 [2024-11-20 07:14:18.404805] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:36.411 [2024-11-20 07:14:18.404859] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:36.411 [2024-11-20 07:14:18.404987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.411 [2024-11-20 07:14:18.422003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:17:36.411 spare 00:17:36.411 07:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.411 07:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:36.411 [2024-11-20 07:14:18.424405] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.348 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.348 "name": "raid_bdev1", 00:17:37.348 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:37.348 "strip_size_kb": 0, 00:17:37.348 "state": "online", 00:17:37.348 
"raid_level": "raid1", 00:17:37.348 "superblock": true, 00:17:37.348 "num_base_bdevs": 4, 00:17:37.348 "num_base_bdevs_discovered": 3, 00:17:37.348 "num_base_bdevs_operational": 3, 00:17:37.348 "process": { 00:17:37.348 "type": "rebuild", 00:17:37.348 "target": "spare", 00:17:37.348 "progress": { 00:17:37.348 "blocks": 20480, 00:17:37.348 "percent": 32 00:17:37.348 } 00:17:37.348 }, 00:17:37.348 "base_bdevs_list": [ 00:17:37.348 { 00:17:37.348 "name": "spare", 00:17:37.348 "uuid": "b56a6fb6-ad8d-56f6-983a-529e96f30f57", 00:17:37.348 "is_configured": true, 00:17:37.348 "data_offset": 2048, 00:17:37.348 "data_size": 63488 00:17:37.348 }, 00:17:37.348 { 00:17:37.349 "name": null, 00:17:37.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.349 "is_configured": false, 00:17:37.349 "data_offset": 2048, 00:17:37.349 "data_size": 63488 00:17:37.349 }, 00:17:37.349 { 00:17:37.349 "name": "BaseBdev3", 00:17:37.349 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:37.349 "is_configured": true, 00:17:37.349 "data_offset": 2048, 00:17:37.349 "data_size": 63488 00:17:37.349 }, 00:17:37.349 { 00:17:37.349 "name": "BaseBdev4", 00:17:37.349 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:37.349 "is_configured": true, 00:17:37.349 "data_offset": 2048, 00:17:37.349 "data_size": 63488 00:17:37.349 } 00:17:37.349 ] 00:17:37.349 }' 00:17:37.349 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.349 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.349 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.349 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.349 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.349 07:14:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.349 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.349 [2024-11-20 07:14:19.571589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.608 [2024-11-20 07:14:19.630634] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.608 [2024-11-20 07:14:19.630719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.608 [2024-11-20 07:14:19.630738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.608 [2024-11-20 07:14:19.630749] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.608 
07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.608 "name": "raid_bdev1", 00:17:37.608 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:37.608 "strip_size_kb": 0, 00:17:37.608 "state": "online", 00:17:37.608 "raid_level": "raid1", 00:17:37.608 "superblock": true, 00:17:37.608 "num_base_bdevs": 4, 00:17:37.608 "num_base_bdevs_discovered": 2, 00:17:37.608 "num_base_bdevs_operational": 2, 00:17:37.608 "base_bdevs_list": [ 00:17:37.608 { 00:17:37.608 "name": null, 00:17:37.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.608 "is_configured": false, 00:17:37.608 "data_offset": 0, 00:17:37.608 "data_size": 63488 00:17:37.608 }, 00:17:37.608 { 00:17:37.608 "name": null, 00:17:37.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.608 "is_configured": false, 00:17:37.608 "data_offset": 2048, 00:17:37.608 "data_size": 63488 00:17:37.608 }, 00:17:37.608 { 00:17:37.608 "name": "BaseBdev3", 00:17:37.608 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:37.608 "is_configured": true, 00:17:37.608 "data_offset": 2048, 00:17:37.608 "data_size": 63488 00:17:37.608 }, 00:17:37.608 { 00:17:37.608 "name": "BaseBdev4", 00:17:37.608 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:37.608 "is_configured": true, 00:17:37.608 "data_offset": 2048, 00:17:37.608 "data_size": 63488 00:17:37.608 } 00:17:37.608 ] 00:17:37.608 }' 00:17:37.608 07:14:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.608 07:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.176 "name": "raid_bdev1", 00:17:38.176 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:38.176 "strip_size_kb": 0, 00:17:38.176 "state": "online", 00:17:38.176 "raid_level": "raid1", 00:17:38.176 "superblock": true, 00:17:38.176 "num_base_bdevs": 4, 00:17:38.176 "num_base_bdevs_discovered": 2, 00:17:38.176 "num_base_bdevs_operational": 2, 00:17:38.176 "base_bdevs_list": [ 00:17:38.176 { 00:17:38.176 "name": null, 00:17:38.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.176 "is_configured": false, 00:17:38.176 "data_offset": 0, 00:17:38.176 "data_size": 63488 00:17:38.176 }, 00:17:38.176 
{ 00:17:38.176 "name": null, 00:17:38.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.176 "is_configured": false, 00:17:38.176 "data_offset": 2048, 00:17:38.176 "data_size": 63488 00:17:38.176 }, 00:17:38.176 { 00:17:38.176 "name": "BaseBdev3", 00:17:38.176 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:38.176 "is_configured": true, 00:17:38.176 "data_offset": 2048, 00:17:38.176 "data_size": 63488 00:17:38.176 }, 00:17:38.176 { 00:17:38.176 "name": "BaseBdev4", 00:17:38.176 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:38.176 "is_configured": true, 00:17:38.176 "data_offset": 2048, 00:17:38.176 "data_size": 63488 00:17:38.176 } 00:17:38.176 ] 00:17:38.176 }' 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 [2024-11-20 07:14:20.290983] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.176 [2024-11-20 07:14:20.291117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.176 [2024-11-20 07:14:20.291161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:38.176 [2024-11-20 07:14:20.291177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.176 [2024-11-20 07:14:20.291684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.176 [2024-11-20 07:14:20.291719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.176 [2024-11-20 07:14:20.291813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:38.176 [2024-11-20 07:14:20.291833] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:38.176 [2024-11-20 07:14:20.291848] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:38.176 [2024-11-20 07:14:20.291879] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:38.176 BaseBdev1 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.176 07:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.114 07:14:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.114 "name": "raid_bdev1", 00:17:39.114 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:39.114 "strip_size_kb": 0, 00:17:39.114 "state": "online", 00:17:39.114 "raid_level": "raid1", 00:17:39.114 "superblock": true, 00:17:39.114 "num_base_bdevs": 4, 00:17:39.114 "num_base_bdevs_discovered": 2, 00:17:39.114 "num_base_bdevs_operational": 2, 00:17:39.114 "base_bdevs_list": [ 00:17:39.114 { 00:17:39.114 "name": null, 00:17:39.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.114 "is_configured": false, 00:17:39.114 "data_offset": 0, 00:17:39.114 "data_size": 63488 00:17:39.114 }, 00:17:39.114 { 00:17:39.114 "name": null, 00:17:39.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.114 
"is_configured": false, 00:17:39.114 "data_offset": 2048, 00:17:39.114 "data_size": 63488 00:17:39.114 }, 00:17:39.114 { 00:17:39.114 "name": "BaseBdev3", 00:17:39.114 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:39.114 "is_configured": true, 00:17:39.114 "data_offset": 2048, 00:17:39.114 "data_size": 63488 00:17:39.114 }, 00:17:39.114 { 00:17:39.114 "name": "BaseBdev4", 00:17:39.114 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:39.114 "is_configured": true, 00:17:39.114 "data_offset": 2048, 00:17:39.114 "data_size": 63488 00:17:39.114 } 00:17:39.114 ] 00:17:39.114 }' 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.114 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:39.683 "name": "raid_bdev1", 00:17:39.683 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:39.683 "strip_size_kb": 0, 00:17:39.683 "state": "online", 00:17:39.683 "raid_level": "raid1", 00:17:39.683 "superblock": true, 00:17:39.683 "num_base_bdevs": 4, 00:17:39.683 "num_base_bdevs_discovered": 2, 00:17:39.683 "num_base_bdevs_operational": 2, 00:17:39.683 "base_bdevs_list": [ 00:17:39.683 { 00:17:39.683 "name": null, 00:17:39.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.683 "is_configured": false, 00:17:39.683 "data_offset": 0, 00:17:39.683 "data_size": 63488 00:17:39.683 }, 00:17:39.683 { 00:17:39.683 "name": null, 00:17:39.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.683 "is_configured": false, 00:17:39.683 "data_offset": 2048, 00:17:39.683 "data_size": 63488 00:17:39.683 }, 00:17:39.683 { 00:17:39.683 "name": "BaseBdev3", 00:17:39.683 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:39.683 "is_configured": true, 00:17:39.683 "data_offset": 2048, 00:17:39.683 "data_size": 63488 00:17:39.683 }, 00:17:39.683 { 00:17:39.683 "name": "BaseBdev4", 00:17:39.683 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:39.683 "is_configured": true, 00:17:39.683 "data_offset": 2048, 00:17:39.683 "data_size": 63488 00:17:39.683 } 00:17:39.683 ] 00:17:39.683 }' 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.683 [2024-11-20 07:14:21.916446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.683 [2024-11-20 07:14:21.916790] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:39.683 [2024-11-20 07:14:21.916892] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.683 request: 00:17:39.683 { 00:17:39.683 "base_bdev": "BaseBdev1", 00:17:39.683 "raid_bdev": "raid_bdev1", 00:17:39.683 "method": "bdev_raid_add_base_bdev", 00:17:39.683 "req_id": 1 00:17:39.683 } 00:17:39.683 Got JSON-RPC error response 00:17:39.683 response: 00:17:39.683 { 00:17:39.683 "code": -22, 00:17:39.683 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:39.683 } 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.683 07:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.061 "name": "raid_bdev1", 00:17:41.061 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:41.061 "strip_size_kb": 0, 00:17:41.061 "state": "online", 00:17:41.061 "raid_level": "raid1", 00:17:41.061 "superblock": true, 00:17:41.061 "num_base_bdevs": 4, 00:17:41.061 "num_base_bdevs_discovered": 2, 00:17:41.061 "num_base_bdevs_operational": 2, 00:17:41.061 "base_bdevs_list": [ 00:17:41.061 { 00:17:41.061 "name": null, 00:17:41.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.061 "is_configured": false, 00:17:41.061 "data_offset": 0, 00:17:41.061 "data_size": 63488 00:17:41.061 }, 00:17:41.061 { 00:17:41.061 "name": null, 00:17:41.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.061 "is_configured": false, 00:17:41.061 "data_offset": 2048, 00:17:41.061 "data_size": 63488 00:17:41.061 }, 00:17:41.061 { 00:17:41.061 "name": "BaseBdev3", 00:17:41.061 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:41.061 "is_configured": true, 00:17:41.061 "data_offset": 2048, 00:17:41.061 "data_size": 63488 00:17:41.061 }, 00:17:41.061 { 00:17:41.061 "name": "BaseBdev4", 00:17:41.061 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:41.061 "is_configured": true, 00:17:41.061 "data_offset": 2048, 00:17:41.061 "data_size": 63488 00:17:41.061 } 00:17:41.061 ] 00:17:41.061 }' 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.061 07:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.322 07:14:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.322 "name": "raid_bdev1", 00:17:41.322 "uuid": "b8e169b9-6a65-4951-bdad-7bb46530b3cd", 00:17:41.322 "strip_size_kb": 0, 00:17:41.322 "state": "online", 00:17:41.322 "raid_level": "raid1", 00:17:41.322 "superblock": true, 00:17:41.322 "num_base_bdevs": 4, 00:17:41.322 "num_base_bdevs_discovered": 2, 00:17:41.322 "num_base_bdevs_operational": 2, 00:17:41.322 "base_bdevs_list": [ 00:17:41.322 { 00:17:41.322 "name": null, 00:17:41.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.322 "is_configured": false, 00:17:41.322 "data_offset": 0, 00:17:41.322 "data_size": 63488 00:17:41.322 }, 00:17:41.322 { 00:17:41.322 "name": null, 00:17:41.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.322 "is_configured": false, 00:17:41.322 "data_offset": 2048, 00:17:41.322 "data_size": 63488 00:17:41.322 }, 00:17:41.322 { 00:17:41.322 "name": "BaseBdev3", 00:17:41.322 "uuid": "9de45796-4091-5b4d-a345-8c72c03a2c05", 00:17:41.322 "is_configured": true, 00:17:41.322 "data_offset": 2048, 00:17:41.322 "data_size": 63488 00:17:41.322 }, 
00:17:41.322 { 00:17:41.322 "name": "BaseBdev4", 00:17:41.322 "uuid": "91edaf0e-1005-5b84-9a50-55bf54c5635f", 00:17:41.322 "is_configured": true, 00:17:41.322 "data_offset": 2048, 00:17:41.322 "data_size": 63488 00:17:41.322 } 00:17:41.322 ] 00:17:41.322 }' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78431 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78431 ']' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78431 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78431 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.322 killing process with pid 78431 00:17:41.322 Received shutdown signal, test time was about 60.000000 seconds 00:17:41.322 00:17:41.322 Latency(us) 00:17:41.322 [2024-11-20T07:14:23.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.322 [2024-11-20T07:14:23.587Z] =================================================================================================================== 00:17:41.322 [2024-11-20T07:14:23.587Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78431' 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78431 00:17:41.322 [2024-11-20 07:14:23.549181] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:41.322 [2024-11-20 07:14:23.549315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.322 07:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78431 00:17:41.322 [2024-11-20 07:14:23.549414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.322 [2024-11-20 07:14:23.549424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:41.891 [2024-11-20 07:14:24.085431] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:43.272 00:17:43.272 real 0m27.654s 00:17:43.272 user 0m32.870s 00:17:43.272 sys 0m4.479s 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.272 ************************************ 00:17:43.272 END TEST raid_rebuild_test_sb 00:17:43.272 ************************************ 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.272 07:14:25 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:43.272 07:14:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:43.272 07:14:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.272 07:14:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:17:43.272 ************************************ 00:17:43.272 START TEST raid_rebuild_test_io 00:17:43.272 ************************************ 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79214 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79214 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79214 ']' 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.272 07:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.272 [2024-11-20 07:14:25.471377] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:43.272 [2024-11-20 07:14:25.471566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:43.272 Zero copy mechanism will not be used. 00:17:43.272 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79214 ] 00:17:43.531 [2024-11-20 07:14:25.645173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.531 [2024-11-20 07:14:25.774374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.790 [2024-11-20 07:14:26.004694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.790 [2024-11-20 07:14:26.004851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.359 BaseBdev1_malloc 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.359 [2024-11-20 07:14:26.426612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:44.359 [2024-11-20 07:14:26.426810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.359 [2024-11-20 07:14:26.426873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:44.359 [2024-11-20 07:14:26.426915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.359 [2024-11-20 07:14:26.429595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.359 [2024-11-20 07:14:26.429653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:44.359 BaseBdev1 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:17:44.359 BaseBdev2_malloc 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.359 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.359 [2024-11-20 07:14:26.485311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:44.359 [2024-11-20 07:14:26.485474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.359 [2024-11-20 07:14:26.485529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:44.359 [2024-11-20 07:14:26.485568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.359 [2024-11-20 07:14:26.488007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.360 [2024-11-20 07:14:26.488096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:44.360 BaseBdev2 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.360 BaseBdev3_malloc 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.360 [2024-11-20 07:14:26.550111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:44.360 [2024-11-20 07:14:26.550186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.360 [2024-11-20 07:14:26.550210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:44.360 [2024-11-20 07:14:26.550222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.360 [2024-11-20 07:14:26.552508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.360 [2024-11-20 07:14:26.552553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:44.360 BaseBdev3 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.360 BaseBdev4_malloc 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.360 [2024-11-20 07:14:26.605268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:44.360 [2024-11-20 07:14:26.605351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.360 [2024-11-20 07:14:26.605376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:44.360 [2024-11-20 07:14:26.605388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.360 [2024-11-20 07:14:26.607605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.360 BaseBdev4 00:17:44.360 [2024-11-20 07:14:26.607742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.360 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.619 spare_malloc 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.619 spare_delay 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.619 [2024-11-20 07:14:26.679188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.619 [2024-11-20 07:14:26.679361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.619 [2024-11-20 07:14:26.679421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:44.619 [2024-11-20 07:14:26.679459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.619 [2024-11-20 07:14:26.681850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.619 [2024-11-20 07:14:26.681954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.619 spare 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.619 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.619 [2024-11-20 07:14:26.691225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.619 [2024-11-20 07:14:26.693404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.620 [2024-11-20 07:14:26.693530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.620 [2024-11-20 07:14:26.693636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:17:44.620 [2024-11-20 07:14:26.693766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:44.620 [2024-11-20 07:14:26.693817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:44.620 [2024-11-20 07:14:26.694137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:44.620 [2024-11-20 07:14:26.694416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:44.620 [2024-11-20 07:14:26.694469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:44.620 [2024-11-20 07:14:26.694707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.620 "name": "raid_bdev1", 00:17:44.620 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:44.620 "strip_size_kb": 0, 00:17:44.620 "state": "online", 00:17:44.620 "raid_level": "raid1", 00:17:44.620 "superblock": false, 00:17:44.620 "num_base_bdevs": 4, 00:17:44.620 "num_base_bdevs_discovered": 4, 00:17:44.620 "num_base_bdevs_operational": 4, 00:17:44.620 "base_bdevs_list": [ 00:17:44.620 { 00:17:44.620 "name": "BaseBdev1", 00:17:44.620 "uuid": "24d2b305-87de-5da3-b24a-797f4f9edc8a", 00:17:44.620 "is_configured": true, 00:17:44.620 "data_offset": 0, 00:17:44.620 "data_size": 65536 00:17:44.620 }, 00:17:44.620 { 00:17:44.620 "name": "BaseBdev2", 00:17:44.620 "uuid": "6d9bdd02-6dd5-5983-a441-72f2998ddb69", 00:17:44.620 "is_configured": true, 00:17:44.620 "data_offset": 0, 00:17:44.620 "data_size": 65536 00:17:44.620 }, 00:17:44.620 { 00:17:44.620 "name": "BaseBdev3", 00:17:44.620 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:44.620 "is_configured": true, 00:17:44.620 "data_offset": 0, 00:17:44.620 "data_size": 65536 00:17:44.620 }, 00:17:44.620 { 00:17:44.620 "name": "BaseBdev4", 00:17:44.620 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:44.620 "is_configured": true, 00:17:44.620 "data_offset": 0, 00:17:44.620 "data_size": 65536 00:17:44.620 } 00:17:44.620 ] 00:17:44.620 }' 00:17:44.620 
07:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.620 07:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 [2024-11-20 07:14:27.170727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:45.189 07:14:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 [2024-11-20 07:14:27.258227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.189 "name": "raid_bdev1", 00:17:45.189 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:45.189 "strip_size_kb": 0, 00:17:45.189 "state": "online", 00:17:45.189 "raid_level": "raid1", 00:17:45.189 "superblock": false, 00:17:45.189 "num_base_bdevs": 4, 00:17:45.189 "num_base_bdevs_discovered": 3, 00:17:45.189 "num_base_bdevs_operational": 3, 00:17:45.189 "base_bdevs_list": [ 00:17:45.189 { 00:17:45.189 "name": null, 00:17:45.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.189 "is_configured": false, 00:17:45.189 "data_offset": 0, 00:17:45.189 "data_size": 65536 00:17:45.189 }, 00:17:45.189 { 00:17:45.189 "name": "BaseBdev2", 00:17:45.189 "uuid": "6d9bdd02-6dd5-5983-a441-72f2998ddb69", 00:17:45.189 "is_configured": true, 00:17:45.189 "data_offset": 0, 00:17:45.189 "data_size": 65536 00:17:45.189 }, 00:17:45.189 { 00:17:45.189 "name": "BaseBdev3", 00:17:45.189 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:45.189 "is_configured": true, 00:17:45.189 "data_offset": 0, 00:17:45.189 "data_size": 65536 00:17:45.189 }, 00:17:45.189 { 00:17:45.189 "name": "BaseBdev4", 00:17:45.189 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:45.189 "is_configured": true, 00:17:45.189 "data_offset": 0, 00:17:45.189 "data_size": 65536 00:17:45.189 } 00:17:45.189 ] 00:17:45.189 }' 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.189 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.189 [2024-11-20 07:14:27.406862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:45.189 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:45.189 Zero copy mechanism will not be used. 00:17:45.189 Running I/O for 60 seconds... 
00:17:45.758 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.758 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.758 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.758 [2024-11-20 07:14:27.741459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.758 07:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.758 07:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:45.758 [2024-11-20 07:14:27.827769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:45.758 [2024-11-20 07:14:27.830208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.758 [2024-11-20 07:14:27.942323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:45.758 [2024-11-20 07:14:27.943081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:46.017 [2024-11-20 07:14:28.056912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:46.017 [2024-11-20 07:14:28.057400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:46.275 134.00 IOPS, 402.00 MiB/s [2024-11-20T07:14:28.540Z] [2024-11-20 07:14:28.443372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:46.275 [2024-11-20 07:14:28.444271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:46.533 [2024-11-20 07:14:28.759877] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.533 07:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.792 "name": "raid_bdev1", 00:17:46.792 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:46.792 "strip_size_kb": 0, 00:17:46.792 "state": "online", 00:17:46.792 "raid_level": "raid1", 00:17:46.792 "superblock": false, 00:17:46.792 "num_base_bdevs": 4, 00:17:46.792 "num_base_bdevs_discovered": 4, 00:17:46.792 "num_base_bdevs_operational": 4, 00:17:46.792 "process": { 00:17:46.792 "type": "rebuild", 00:17:46.792 "target": "spare", 00:17:46.792 "progress": { 00:17:46.792 "blocks": 14336, 00:17:46.792 "percent": 21 00:17:46.792 } 00:17:46.792 }, 00:17:46.792 "base_bdevs_list": [ 00:17:46.792 { 00:17:46.792 "name": "spare", 00:17:46.792 "uuid": 
"d88a972b-021b-597d-93aa-8492988da15d", 00:17:46.792 "is_configured": true, 00:17:46.792 "data_offset": 0, 00:17:46.792 "data_size": 65536 00:17:46.792 }, 00:17:46.792 { 00:17:46.792 "name": "BaseBdev2", 00:17:46.792 "uuid": "6d9bdd02-6dd5-5983-a441-72f2998ddb69", 00:17:46.792 "is_configured": true, 00:17:46.792 "data_offset": 0, 00:17:46.792 "data_size": 65536 00:17:46.792 }, 00:17:46.792 { 00:17:46.792 "name": "BaseBdev3", 00:17:46.792 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:46.792 "is_configured": true, 00:17:46.792 "data_offset": 0, 00:17:46.792 "data_size": 65536 00:17:46.792 }, 00:17:46.792 { 00:17:46.792 "name": "BaseBdev4", 00:17:46.792 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:46.792 "is_configured": true, 00:17:46.792 "data_offset": 0, 00:17:46.792 "data_size": 65536 00:17:46.792 } 00:17:46.792 ] 00:17:46.792 }' 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.792 07:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.792 [2024-11-20 07:14:28.939620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.792 [2024-11-20 07:14:28.981911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:46.792 [2024-11-20 07:14:28.982736] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:47.050 [2024-11-20 07:14:29.085487] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:47.050 [2024-11-20 07:14:29.097631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.050 [2024-11-20 07:14:29.097730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.050 [2024-11-20 07:14:29.097748] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:47.050 [2024-11-20 07:14:29.139060] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.050 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.050 "name": "raid_bdev1", 00:17:47.050 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:47.050 "strip_size_kb": 0, 00:17:47.050 "state": "online", 00:17:47.050 "raid_level": "raid1", 00:17:47.050 "superblock": false, 00:17:47.050 "num_base_bdevs": 4, 00:17:47.050 "num_base_bdevs_discovered": 3, 00:17:47.050 "num_base_bdevs_operational": 3, 00:17:47.050 "base_bdevs_list": [ 00:17:47.050 { 00:17:47.050 "name": null, 00:17:47.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.050 "is_configured": false, 00:17:47.050 "data_offset": 0, 00:17:47.050 "data_size": 65536 00:17:47.050 }, 00:17:47.050 { 00:17:47.050 "name": "BaseBdev2", 00:17:47.050 "uuid": "6d9bdd02-6dd5-5983-a441-72f2998ddb69", 00:17:47.050 "is_configured": true, 00:17:47.050 "data_offset": 0, 00:17:47.050 "data_size": 65536 00:17:47.050 }, 00:17:47.050 { 00:17:47.050 "name": "BaseBdev3", 00:17:47.050 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:47.050 "is_configured": true, 00:17:47.050 "data_offset": 0, 00:17:47.050 "data_size": 65536 00:17:47.050 }, 00:17:47.051 { 00:17:47.051 "name": "BaseBdev4", 00:17:47.051 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:47.051 "is_configured": true, 00:17:47.051 "data_offset": 0, 00:17:47.051 "data_size": 65536 00:17:47.051 } 00:17:47.051 ] 00:17:47.051 }' 00:17:47.051 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:47.051 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.568 111.00 IOPS, 333.00 MiB/s [2024-11-20T07:14:29.833Z] 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.568 "name": "raid_bdev1", 00:17:47.568 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:47.568 "strip_size_kb": 0, 00:17:47.568 "state": "online", 00:17:47.568 "raid_level": "raid1", 00:17:47.568 "superblock": false, 00:17:47.568 "num_base_bdevs": 4, 00:17:47.568 "num_base_bdevs_discovered": 3, 00:17:47.568 "num_base_bdevs_operational": 3, 00:17:47.568 "base_bdevs_list": [ 00:17:47.568 { 00:17:47.568 "name": null, 00:17:47.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.568 "is_configured": false, 00:17:47.568 "data_offset": 0, 00:17:47.568 "data_size": 65536 00:17:47.568 }, 00:17:47.568 { 
00:17:47.568 "name": "BaseBdev2", 00:17:47.568 "uuid": "6d9bdd02-6dd5-5983-a441-72f2998ddb69", 00:17:47.568 "is_configured": true, 00:17:47.568 "data_offset": 0, 00:17:47.568 "data_size": 65536 00:17:47.568 }, 00:17:47.568 { 00:17:47.568 "name": "BaseBdev3", 00:17:47.568 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:47.568 "is_configured": true, 00:17:47.568 "data_offset": 0, 00:17:47.568 "data_size": 65536 00:17:47.568 }, 00:17:47.568 { 00:17:47.568 "name": "BaseBdev4", 00:17:47.568 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:47.568 "is_configured": true, 00:17:47.568 "data_offset": 0, 00:17:47.568 "data_size": 65536 00:17:47.568 } 00:17:47.568 ] 00:17:47.568 }' 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.568 [2024-11-20 07:14:29.770636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.568 07:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:47.827 [2024-11-20 07:14:29.856525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:47.827 [2024-11-20 07:14:29.858886] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.827 [2024-11-20 07:14:29.978471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:47.827 [2024-11-20 07:14:29.979129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:48.085 [2024-11-20 07:14:30.099673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:48.085 [2024-11-20 07:14:30.100031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:48.344 131.33 IOPS, 394.00 MiB/s [2024-11-20T07:14:30.609Z] [2024-11-20 07:14:30.426538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:48.344 [2024-11-20 07:14:30.428085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:48.603 [2024-11-20 07:14:30.686584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.603 07:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.863 "name": "raid_bdev1", 00:17:48.863 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:48.863 "strip_size_kb": 0, 00:17:48.863 "state": "online", 00:17:48.863 "raid_level": "raid1", 00:17:48.863 "superblock": false, 00:17:48.863 "num_base_bdevs": 4, 00:17:48.863 "num_base_bdevs_discovered": 4, 00:17:48.863 "num_base_bdevs_operational": 4, 00:17:48.863 "process": { 00:17:48.863 "type": "rebuild", 00:17:48.863 "target": "spare", 00:17:48.863 "progress": { 00:17:48.863 "blocks": 12288, 00:17:48.863 "percent": 18 00:17:48.863 } 00:17:48.863 }, 00:17:48.863 "base_bdevs_list": [ 00:17:48.863 { 00:17:48.863 "name": "spare", 00:17:48.863 "uuid": "d88a972b-021b-597d-93aa-8492988da15d", 00:17:48.863 "is_configured": true, 00:17:48.863 "data_offset": 0, 00:17:48.863 "data_size": 65536 00:17:48.863 }, 00:17:48.863 { 00:17:48.863 "name": "BaseBdev2", 00:17:48.863 "uuid": "6d9bdd02-6dd5-5983-a441-72f2998ddb69", 00:17:48.863 "is_configured": true, 00:17:48.863 "data_offset": 0, 00:17:48.863 "data_size": 65536 00:17:48.863 }, 00:17:48.863 { 00:17:48.863 "name": "BaseBdev3", 00:17:48.863 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:48.863 "is_configured": true, 00:17:48.863 "data_offset": 0, 00:17:48.863 "data_size": 65536 00:17:48.863 }, 00:17:48.863 { 00:17:48.863 "name": "BaseBdev4", 00:17:48.863 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:48.863 "is_configured": true, 00:17:48.863 "data_offset": 0, 00:17:48.863 "data_size": 65536 00:17:48.863 } 00:17:48.863 ] 00:17:48.863 }' 00:17:48.863 07:14:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.863 [2024-11-20 07:14:30.945648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.863 07:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.863 [2024-11-20 07:14:30.981052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.863 [2024-11-20 07:14:31.074978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:49.123 [2024-11-20 07:14:31.203836] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:49.123 [2024-11-20 07:14:31.203983] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.123 "name": "raid_bdev1", 00:17:49.123 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:49.123 "strip_size_kb": 0, 00:17:49.123 "state": "online", 00:17:49.123 "raid_level": "raid1", 00:17:49.123 "superblock": false, 00:17:49.123 "num_base_bdevs": 4, 00:17:49.123 "num_base_bdevs_discovered": 3, 00:17:49.123 "num_base_bdevs_operational": 3, 00:17:49.123 "process": { 00:17:49.123 "type": "rebuild", 00:17:49.123 "target": "spare", 00:17:49.123 "progress": { 00:17:49.123 "blocks": 16384, 00:17:49.123 "percent": 25 00:17:49.123 } 00:17:49.123 }, 
00:17:49.123 "base_bdevs_list": [ 00:17:49.123 { 00:17:49.123 "name": "spare", 00:17:49.123 "uuid": "d88a972b-021b-597d-93aa-8492988da15d", 00:17:49.123 "is_configured": true, 00:17:49.123 "data_offset": 0, 00:17:49.123 "data_size": 65536 00:17:49.123 }, 00:17:49.123 { 00:17:49.123 "name": null, 00:17:49.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.123 "is_configured": false, 00:17:49.123 "data_offset": 0, 00:17:49.123 "data_size": 65536 00:17:49.123 }, 00:17:49.123 { 00:17:49.123 "name": "BaseBdev3", 00:17:49.123 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:49.123 "is_configured": true, 00:17:49.123 "data_offset": 0, 00:17:49.123 "data_size": 65536 00:17:49.123 }, 00:17:49.123 { 00:17:49.123 "name": "BaseBdev4", 00:17:49.123 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:49.123 "is_configured": true, 00:17:49.123 "data_offset": 0, 00:17:49.123 "data_size": 65536 00:17:49.123 } 00:17:49.123 ] 00:17:49.123 }' 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.123 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.124 07:14:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.124 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.124 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.124 07:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.124 07:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.124 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.384 07:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.384 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.384 "name": "raid_bdev1", 00:17:49.384 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:49.384 "strip_size_kb": 0, 00:17:49.384 "state": "online", 00:17:49.384 "raid_level": "raid1", 00:17:49.384 "superblock": false, 00:17:49.384 "num_base_bdevs": 4, 00:17:49.384 "num_base_bdevs_discovered": 3, 00:17:49.384 "num_base_bdevs_operational": 3, 00:17:49.384 "process": { 00:17:49.384 "type": "rebuild", 00:17:49.384 "target": "spare", 00:17:49.384 "progress": { 00:17:49.384 "blocks": 18432, 00:17:49.384 "percent": 28 00:17:49.384 } 00:17:49.384 }, 00:17:49.384 "base_bdevs_list": [ 00:17:49.384 { 00:17:49.384 "name": "spare", 00:17:49.384 "uuid": "d88a972b-021b-597d-93aa-8492988da15d", 00:17:49.384 "is_configured": true, 00:17:49.384 "data_offset": 0, 00:17:49.384 "data_size": 65536 00:17:49.384 }, 00:17:49.384 { 00:17:49.384 "name": null, 00:17:49.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.384 "is_configured": false, 00:17:49.384 "data_offset": 0, 00:17:49.384 "data_size": 65536 00:17:49.384 }, 00:17:49.384 { 00:17:49.384 "name": "BaseBdev3", 00:17:49.384 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:49.384 
"is_configured": true, 00:17:49.384 "data_offset": 0, 00:17:49.384 "data_size": 65536 00:17:49.384 }, 00:17:49.384 { 00:17:49.384 "name": "BaseBdev4", 00:17:49.384 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:49.384 "is_configured": true, 00:17:49.384 "data_offset": 0, 00:17:49.384 "data_size": 65536 00:17:49.384 } 00:17:49.384 ] 00:17:49.384 }' 00:17:49.384 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.384 112.00 IOPS, 336.00 MiB/s [2024-11-20T07:14:31.649Z] 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.384 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.384 [2024-11-20 07:14:31.463532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:49.384 [2024-11-20 07:14:31.464256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:49.384 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.384 07:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.644 [2024-11-20 07:14:31.668833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:49.903 [2024-11-20 07:14:31.995976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:50.203 [2024-11-20 07:14:32.200508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:50.485 100.60 IOPS, 301.80 MiB/s [2024-11-20T07:14:32.750Z] 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.485 [2024-11-20 07:14:32.560569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.485 "name": "raid_bdev1", 00:17:50.485 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:50.485 "strip_size_kb": 0, 00:17:50.485 "state": "online", 00:17:50.485 "raid_level": "raid1", 00:17:50.485 "superblock": false, 00:17:50.485 "num_base_bdevs": 4, 00:17:50.485 "num_base_bdevs_discovered": 3, 00:17:50.485 "num_base_bdevs_operational": 3, 00:17:50.485 "process": { 00:17:50.485 "type": "rebuild", 00:17:50.485 "target": "spare", 00:17:50.485 "progress": { 00:17:50.485 "blocks": 32768, 00:17:50.485 "percent": 50 00:17:50.485 } 00:17:50.485 }, 00:17:50.485 "base_bdevs_list": [ 00:17:50.485 { 00:17:50.485 "name": "spare", 00:17:50.485 "uuid": 
"d88a972b-021b-597d-93aa-8492988da15d", 00:17:50.485 "is_configured": true, 00:17:50.485 "data_offset": 0, 00:17:50.485 "data_size": 65536 00:17:50.485 }, 00:17:50.485 { 00:17:50.485 "name": null, 00:17:50.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.485 "is_configured": false, 00:17:50.485 "data_offset": 0, 00:17:50.485 "data_size": 65536 00:17:50.485 }, 00:17:50.485 { 00:17:50.485 "name": "BaseBdev3", 00:17:50.485 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:50.485 "is_configured": true, 00:17:50.485 "data_offset": 0, 00:17:50.485 "data_size": 65536 00:17:50.485 }, 00:17:50.485 { 00:17:50.485 "name": "BaseBdev4", 00:17:50.485 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:50.485 "is_configured": true, 00:17:50.485 "data_offset": 0, 00:17:50.485 "data_size": 65536 00:17:50.485 } 00:17:50.485 ] 00:17:50.485 }' 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.485 07:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:50.745 [2024-11-20 07:14:32.897935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:51.004 [2024-11-20 07:14:33.016444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:51.520 91.67 IOPS, 275.00 MiB/s [2024-11-20T07:14:33.785Z] 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.520 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:51.520 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.520 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.520 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.520 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.521 "name": "raid_bdev1", 00:17:51.521 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:51.521 "strip_size_kb": 0, 00:17:51.521 "state": "online", 00:17:51.521 "raid_level": "raid1", 00:17:51.521 "superblock": false, 00:17:51.521 "num_base_bdevs": 4, 00:17:51.521 "num_base_bdevs_discovered": 3, 00:17:51.521 "num_base_bdevs_operational": 3, 00:17:51.521 "process": { 00:17:51.521 "type": "rebuild", 00:17:51.521 "target": "spare", 00:17:51.521 "progress": { 00:17:51.521 "blocks": 49152, 00:17:51.521 "percent": 75 00:17:51.521 } 00:17:51.521 }, 00:17:51.521 "base_bdevs_list": [ 00:17:51.521 { 00:17:51.521 "name": "spare", 00:17:51.521 "uuid": "d88a972b-021b-597d-93aa-8492988da15d", 00:17:51.521 "is_configured": true, 00:17:51.521 "data_offset": 0, 00:17:51.521 "data_size": 65536 00:17:51.521 }, 00:17:51.521 { 00:17:51.521 "name": null, 00:17:51.521 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:51.521 "is_configured": false, 00:17:51.521 "data_offset": 0, 00:17:51.521 "data_size": 65536 00:17:51.521 }, 00:17:51.521 { 00:17:51.521 "name": "BaseBdev3", 00:17:51.521 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:51.521 "is_configured": true, 00:17:51.521 "data_offset": 0, 00:17:51.521 "data_size": 65536 00:17:51.521 }, 00:17:51.521 { 00:17:51.521 "name": "BaseBdev4", 00:17:51.521 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:51.521 "is_configured": true, 00:17:51.521 "data_offset": 0, 00:17:51.521 "data_size": 65536 00:17:51.521 } 00:17:51.521 ] 00:17:51.521 }' 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.521 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.779 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.779 07:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.346 86.00 IOPS, 258.00 MiB/s [2024-11-20T07:14:34.611Z] [2024-11-20 07:14:34.491212] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:52.346 [2024-11-20 07:14:34.596626] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:52.346 [2024-11-20 07:14:34.601182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.605 07:14:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.605 "name": "raid_bdev1", 00:17:52.605 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:52.605 "strip_size_kb": 0, 00:17:52.605 "state": "online", 00:17:52.605 "raid_level": "raid1", 00:17:52.605 "superblock": false, 00:17:52.605 "num_base_bdevs": 4, 00:17:52.605 "num_base_bdevs_discovered": 3, 00:17:52.605 "num_base_bdevs_operational": 3, 00:17:52.605 "base_bdevs_list": [ 00:17:52.605 { 00:17:52.605 "name": "spare", 00:17:52.605 "uuid": "d88a972b-021b-597d-93aa-8492988da15d", 00:17:52.605 "is_configured": true, 00:17:52.605 "data_offset": 0, 00:17:52.605 "data_size": 65536 00:17:52.605 }, 00:17:52.605 { 00:17:52.605 "name": null, 00:17:52.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.605 "is_configured": false, 00:17:52.605 "data_offset": 0, 00:17:52.605 "data_size": 65536 00:17:52.605 }, 00:17:52.605 { 00:17:52.605 "name": "BaseBdev3", 00:17:52.605 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:52.605 "is_configured": true, 00:17:52.605 "data_offset": 0, 00:17:52.605 "data_size": 65536 00:17:52.605 }, 
00:17:52.605 { 00:17:52.605 "name": "BaseBdev4", 00:17:52.605 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:52.605 "is_configured": true, 00:17:52.605 "data_offset": 0, 00:17:52.605 "data_size": 65536 00:17:52.605 } 00:17:52.605 ] 00:17:52.605 }' 00:17:52.605 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:52.864 "name": "raid_bdev1", 00:17:52.864 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:52.864 "strip_size_kb": 0, 00:17:52.864 "state": "online", 00:17:52.864 "raid_level": "raid1", 00:17:52.864 "superblock": false, 00:17:52.864 "num_base_bdevs": 4, 00:17:52.864 "num_base_bdevs_discovered": 3, 00:17:52.864 "num_base_bdevs_operational": 3, 00:17:52.864 "base_bdevs_list": [ 00:17:52.864 { 00:17:52.864 "name": "spare", 00:17:52.864 "uuid": "d88a972b-021b-597d-93aa-8492988da15d", 00:17:52.864 "is_configured": true, 00:17:52.864 "data_offset": 0, 00:17:52.864 "data_size": 65536 00:17:52.864 }, 00:17:52.864 { 00:17:52.864 "name": null, 00:17:52.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.864 "is_configured": false, 00:17:52.864 "data_offset": 0, 00:17:52.864 "data_size": 65536 00:17:52.864 }, 00:17:52.864 { 00:17:52.864 "name": "BaseBdev3", 00:17:52.864 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:52.864 "is_configured": true, 00:17:52.864 "data_offset": 0, 00:17:52.864 "data_size": 65536 00:17:52.864 }, 00:17:52.864 { 00:17:52.864 "name": "BaseBdev4", 00:17:52.864 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:52.864 "is_configured": true, 00:17:52.864 "data_offset": 0, 00:17:52.864 "data_size": 65536 00:17:52.864 } 00:17:52.864 ] 00:17:52.864 }' 00:17:52.864 07:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.864 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.122 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.122 "name": "raid_bdev1", 00:17:53.122 "uuid": "aba2f8f5-aba6-45e8-9833-877678f758c7", 00:17:53.122 "strip_size_kb": 0, 00:17:53.122 "state": "online", 00:17:53.122 "raid_level": "raid1", 00:17:53.122 "superblock": false, 00:17:53.122 "num_base_bdevs": 4, 00:17:53.122 "num_base_bdevs_discovered": 3, 00:17:53.122 "num_base_bdevs_operational": 3, 00:17:53.122 "base_bdevs_list": [ 00:17:53.122 { 00:17:53.122 "name": "spare", 00:17:53.122 "uuid": 
"d88a972b-021b-597d-93aa-8492988da15d", 00:17:53.122 "is_configured": true, 00:17:53.122 "data_offset": 0, 00:17:53.122 "data_size": 65536 00:17:53.122 }, 00:17:53.122 { 00:17:53.122 "name": null, 00:17:53.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.122 "is_configured": false, 00:17:53.122 "data_offset": 0, 00:17:53.122 "data_size": 65536 00:17:53.122 }, 00:17:53.122 { 00:17:53.122 "name": "BaseBdev3", 00:17:53.122 "uuid": "15ff4a02-6af4-59f8-b845-4bb4f3449ff5", 00:17:53.122 "is_configured": true, 00:17:53.122 "data_offset": 0, 00:17:53.122 "data_size": 65536 00:17:53.122 }, 00:17:53.122 { 00:17:53.122 "name": "BaseBdev4", 00:17:53.122 "uuid": "914a1bf2-2bbe-5cc9-b6fe-817e7a48eb8c", 00:17:53.122 "is_configured": true, 00:17:53.122 "data_offset": 0, 00:17:53.122 "data_size": 65536 00:17:53.122 } 00:17:53.122 ] 00:17:53.122 }' 00:17:53.122 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.122 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.381 79.12 IOPS, 237.38 MiB/s [2024-11-20T07:14:35.646Z] 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.381 [2024-11-20 07:14:35.520479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.381 [2024-11-20 07:14:35.520560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.381 00:17:53.381 Latency(us) 00:17:53.381 [2024-11-20T07:14:35.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.381 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:53.381 raid_bdev1 : 8.19 78.05 234.16 0.00 0.00 17622.26 
402.45 122715.44 00:17:53.381 [2024-11-20T07:14:35.646Z] =================================================================================================================== 00:17:53.381 [2024-11-20T07:14:35.646Z] Total : 78.05 234.16 0.00 0.00 17622.26 402.45 122715.44 00:17:53.381 [2024-11-20 07:14:35.605988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.381 [2024-11-20 07:14:35.606120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.381 [2024-11-20 07:14:35.606268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.381 { 00:17:53.381 "results": [ 00:17:53.381 { 00:17:53.381 "job": "raid_bdev1", 00:17:53.381 "core_mask": "0x1", 00:17:53.381 "workload": "randrw", 00:17:53.381 "percentage": 50, 00:17:53.381 "status": "finished", 00:17:53.381 "queue_depth": 2, 00:17:53.381 "io_size": 3145728, 00:17:53.381 "runtime": 8.18661, 00:17:53.381 "iops": 78.05428620637846, 00:17:53.381 "mibps": 234.1628586191354, 00:17:53.381 "io_failed": 0, 00:17:53.381 "io_timeout": 0, 00:17:53.381 "avg_latency_us": 17622.25687243305, 00:17:53.381 "min_latency_us": 402.44541484716154, 00:17:53.381 "max_latency_us": 122715.44454148471 00:17:53.381 } 00:17:53.381 ], 00:17:53.381 "core_count": 1 00:17:53.381 } 00:17:53.381 [2024-11-20 07:14:35.606382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set
+x 00:17:53.381 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:53.639 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:53.640 /dev/nbd0 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:53.640 07:14:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:53.640 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:53.899 1+0 records in 00:17:53.899 1+0 records out 00:17:53.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666025 s, 6.1 MB/s 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@728 -- # continue 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:53.899 07:14:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:53.899 /dev/nbd1 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:53.899 1+0 records in 00:17:53.899 1+0 records out 00:17:53.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026406 s, 15.5 MB/s 00:17:53.899 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.160 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:54.421 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:54.422 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:54.681 /dev/nbd1 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:54.681 1+0 records in 00:17:54.681 1+0 records out 
00:17:54.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264164 s, 15.5 MB/s 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:54.681 07:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:54.941 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:54.941 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.941 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:54.941 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.941 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:54.941 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.941 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:55.201 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79214 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79214 ']' 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79214 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79214 00:17:55.461 killing process with pid 79214 00:17:55.461 Received shutdown signal, test time was about 10.172049 seconds 00:17:55.461 00:17:55.461 Latency(us) 00:17:55.461 [2024-11-20T07:14:37.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.461 [2024-11-20T07:14:37.726Z] =================================================================================================================== 00:17:55.461 [2024-11-20T07:14:37.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79214' 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79214 00:17:55.461 [2024-11-20 
07:14:37.561827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.461 07:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79214 00:17:56.030 [2024-11-20 07:14:37.989473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.972 07:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:56.972 00:17:56.972 real 0m13.830s 00:17:56.972 user 0m17.470s 00:17:56.972 sys 0m1.927s 00:17:56.972 ************************************ 00:17:56.972 END TEST raid_rebuild_test_io 00:17:56.972 ************************************ 00:17:56.972 07:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.972 07:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.231 07:14:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:57.231 07:14:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:57.231 07:14:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.231 07:14:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.231 ************************************ 00:17:57.231 START TEST raid_rebuild_test_sb_io 00:17:57.231 ************************************ 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # local verify=true 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.231 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:57.232 07:14:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79629 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79629 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79629 ']' 00:17:57.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.232 07:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.232 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:57.232 Zero copy mechanism will not be used. 00:17:57.232 [2024-11-20 07:14:39.373949] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:57.232 [2024-11-20 07:14:39.374068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79629 ] 00:17:57.490 [2024-11-20 07:14:39.550225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.490 [2024-11-20 07:14:39.667666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.749 [2024-11-20 07:14:39.867332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.749 [2024-11-20 07:14:39.867402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.316 BaseBdev1_malloc 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.316 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.316 [2024-11-20 07:14:40.345689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:58.317 [2024-11-20 07:14:40.345823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.317 [2024-11-20 07:14:40.345871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:58.317 [2024-11-20 07:14:40.345909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.317 [2024-11-20 07:14:40.348100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.317 [2024-11-20 07:14:40.348177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:58.317 BaseBdev1 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 BaseBdev2_malloc 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 [2024-11-20 07:14:40.403294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:58.317 [2024-11-20 07:14:40.403415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.317 [2024-11-20 07:14:40.403460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:58.317 [2024-11-20 07:14:40.403495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.317 [2024-11-20 07:14:40.405862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.317 [2024-11-20 07:14:40.405945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:58.317 BaseBdev2 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 BaseBdev3_malloc 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.317 07:14:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 [2024-11-20 07:14:40.493296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:58.317 [2024-11-20 07:14:40.493422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.317 [2024-11-20 07:14:40.493463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:58.317 [2024-11-20 07:14:40.493505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.317 [2024-11-20 07:14:40.495560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.317 [2024-11-20 07:14:40.495632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:58.317 BaseBdev3 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 BaseBdev4_malloc 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 [2024-11-20 07:14:40.547068] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:17:58.317 [2024-11-20 07:14:40.547188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.317 [2024-11-20 07:14:40.547243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:58.317 [2024-11-20 07:14:40.547279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.317 [2024-11-20 07:14:40.549503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.317 [2024-11-20 07:14:40.549583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:58.317 BaseBdev4 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.317 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.575 spare_malloc 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.575 spare_delay 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.575 [2024-11-20 07:14:40.614200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:58.575 [2024-11-20 07:14:40.614300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.575 [2024-11-20 07:14:40.614367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:58.575 [2024-11-20 07:14:40.614381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.575 [2024-11-20 07:14:40.616449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.575 [2024-11-20 07:14:40.616489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.575 spare 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.575 [2024-11-20 07:14:40.626231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.575 [2024-11-20 07:14:40.628127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.575 [2024-11-20 07:14:40.628240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.575 [2024-11-20 07:14:40.628320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:58.575 [2024-11-20 07:14:40.628552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:17:58.575 [2024-11-20 07:14:40.628605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:58.575 [2024-11-20 07:14:40.628876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.575 [2024-11-20 07:14:40.629096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:58.575 [2024-11-20 07:14:40.629141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:58.575 [2024-11-20 07:14:40.629340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.575 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.576 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.576 "name": "raid_bdev1", 00:17:58.576 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:17:58.576 "strip_size_kb": 0, 00:17:58.576 "state": "online", 00:17:58.576 "raid_level": "raid1", 00:17:58.576 "superblock": true, 00:17:58.576 "num_base_bdevs": 4, 00:17:58.576 "num_base_bdevs_discovered": 4, 00:17:58.576 "num_base_bdevs_operational": 4, 00:17:58.576 "base_bdevs_list": [ 00:17:58.576 { 00:17:58.576 "name": "BaseBdev1", 00:17:58.576 "uuid": "2de28cb1-c4a2-5409-85be-5a5701b34233", 00:17:58.576 "is_configured": true, 00:17:58.576 "data_offset": 2048, 00:17:58.576 "data_size": 63488 00:17:58.576 }, 00:17:58.576 { 00:17:58.576 "name": "BaseBdev2", 00:17:58.576 "uuid": "8541d084-bcaf-54b2-b864-2d8851481dcb", 00:17:58.576 "is_configured": true, 00:17:58.576 "data_offset": 2048, 00:17:58.576 "data_size": 63488 00:17:58.576 }, 00:17:58.576 { 00:17:58.576 "name": "BaseBdev3", 00:17:58.576 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:17:58.576 "is_configured": true, 00:17:58.576 "data_offset": 2048, 00:17:58.576 "data_size": 63488 00:17:58.576 }, 00:17:58.576 { 00:17:58.576 "name": "BaseBdev4", 00:17:58.576 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:17:58.576 "is_configured": true, 00:17:58.576 "data_offset": 2048, 00:17:58.576 "data_size": 63488 00:17:58.576 } 00:17:58.576 ] 00:17:58.576 }' 00:17:58.576 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:58.576 07:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 [2024-11-20 07:14:41.125823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:59.144 07:14:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 [2024-11-20 07:14:41.225223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.144 "name": "raid_bdev1", 00:17:59.144 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:17:59.144 "strip_size_kb": 0, 00:17:59.144 "state": "online", 00:17:59.144 "raid_level": "raid1", 00:17:59.144 "superblock": true, 00:17:59.144 "num_base_bdevs": 4, 00:17:59.144 "num_base_bdevs_discovered": 3, 00:17:59.144 "num_base_bdevs_operational": 3, 00:17:59.144 "base_bdevs_list": [ 00:17:59.144 { 00:17:59.144 "name": null, 00:17:59.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.144 "is_configured": false, 00:17:59.144 "data_offset": 0, 00:17:59.144 "data_size": 63488 00:17:59.144 }, 00:17:59.144 { 00:17:59.144 "name": "BaseBdev2", 00:17:59.144 "uuid": "8541d084-bcaf-54b2-b864-2d8851481dcb", 00:17:59.144 "is_configured": true, 00:17:59.144 "data_offset": 2048, 00:17:59.144 "data_size": 63488 00:17:59.144 }, 00:17:59.144 { 00:17:59.144 "name": "BaseBdev3", 00:17:59.144 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:17:59.144 "is_configured": true, 00:17:59.144 "data_offset": 2048, 00:17:59.144 "data_size": 63488 00:17:59.144 }, 00:17:59.144 { 00:17:59.144 "name": "BaseBdev4", 00:17:59.144 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:17:59.144 "is_configured": true, 00:17:59.144 "data_offset": 2048, 00:17:59.144 "data_size": 63488 00:17:59.144 } 00:17:59.144 ] 00:17:59.144 }' 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.144 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.144 [2024-11-20 07:14:41.329138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:59.144 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:59.144 Zero copy mechanism will not be used. 
00:17:59.144 Running I/O for 60 seconds... 00:17:59.403 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.403 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.403 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.403 [2024-11-20 07:14:41.634437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.661 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.661 07:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:59.661 [2024-11-20 07:14:41.692158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:59.661 [2024-11-20 07:14:41.694169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.661 [2024-11-20 07:14:41.810564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:59.661 [2024-11-20 07:14:41.812100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:59.919 [2024-11-20 07:14:42.026088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:59.919 [2024-11-20 07:14:42.026896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:00.177 172.00 IOPS, 516.00 MiB/s [2024-11-20T07:14:42.442Z] [2024-11-20 07:14:42.384184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:00.177 [2024-11-20 07:14:42.384898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:00.434 
[2024-11-20 07:14:42.604588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:00.434 [2024-11-20 07:14:42.605076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.434 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.693 "name": "raid_bdev1", 00:18:00.693 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:00.693 "strip_size_kb": 0, 00:18:00.693 "state": "online", 00:18:00.693 "raid_level": "raid1", 00:18:00.693 "superblock": true, 00:18:00.693 "num_base_bdevs": 4, 00:18:00.693 "num_base_bdevs_discovered": 4, 00:18:00.693 "num_base_bdevs_operational": 4, 00:18:00.693 "process": { 00:18:00.693 "type": "rebuild", 00:18:00.693 "target": 
"spare", 00:18:00.693 "progress": { 00:18:00.693 "blocks": 10240, 00:18:00.693 "percent": 16 00:18:00.693 } 00:18:00.693 }, 00:18:00.693 "base_bdevs_list": [ 00:18:00.693 { 00:18:00.693 "name": "spare", 00:18:00.693 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:00.693 "is_configured": true, 00:18:00.693 "data_offset": 2048, 00:18:00.693 "data_size": 63488 00:18:00.693 }, 00:18:00.693 { 00:18:00.693 "name": "BaseBdev2", 00:18:00.693 "uuid": "8541d084-bcaf-54b2-b864-2d8851481dcb", 00:18:00.693 "is_configured": true, 00:18:00.693 "data_offset": 2048, 00:18:00.693 "data_size": 63488 00:18:00.693 }, 00:18:00.693 { 00:18:00.693 "name": "BaseBdev3", 00:18:00.693 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:00.693 "is_configured": true, 00:18:00.693 "data_offset": 2048, 00:18:00.693 "data_size": 63488 00:18:00.693 }, 00:18:00.693 { 00:18:00.693 "name": "BaseBdev4", 00:18:00.693 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:00.693 "is_configured": true, 00:18:00.693 "data_offset": 2048, 00:18:00.693 "data_size": 63488 00:18:00.693 } 00:18:00.693 ] 00:18:00.693 }' 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.693 [2024-11-20 07:14:42.838076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:18:00.693 [2024-11-20 07:14:42.855786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:00.693 [2024-11-20 07:14:42.864520] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:00.693 [2024-11-20 07:14:42.866846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.693 [2024-11-20 07:14:42.866893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.693 [2024-11-20 07:14:42.866905] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:00.693 [2024-11-20 07:14:42.898691] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.693 07:14:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.693 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.952 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.952 "name": "raid_bdev1", 00:18:00.952 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:00.952 "strip_size_kb": 0, 00:18:00.952 "state": "online", 00:18:00.952 "raid_level": "raid1", 00:18:00.952 "superblock": true, 00:18:00.952 "num_base_bdevs": 4, 00:18:00.952 "num_base_bdevs_discovered": 3, 00:18:00.952 "num_base_bdevs_operational": 3, 00:18:00.952 "base_bdevs_list": [ 00:18:00.952 { 00:18:00.952 "name": null, 00:18:00.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.952 "is_configured": false, 00:18:00.952 "data_offset": 0, 00:18:00.952 "data_size": 63488 00:18:00.952 }, 00:18:00.952 { 00:18:00.952 "name": "BaseBdev2", 00:18:00.952 "uuid": "8541d084-bcaf-54b2-b864-2d8851481dcb", 00:18:00.952 "is_configured": true, 00:18:00.952 "data_offset": 2048, 00:18:00.952 "data_size": 63488 00:18:00.952 }, 00:18:00.952 { 00:18:00.952 "name": "BaseBdev3", 00:18:00.952 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:00.952 "is_configured": true, 00:18:00.952 "data_offset": 2048, 00:18:00.952 "data_size": 63488 00:18:00.952 }, 00:18:00.952 { 00:18:00.952 "name": "BaseBdev4", 00:18:00.952 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:00.952 "is_configured": true, 00:18:00.952 "data_offset": 2048, 00:18:00.952 
"data_size": 63488 00:18:00.952 } 00:18:00.952 ] 00:18:00.952 }' 00:18:00.952 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.952 07:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.211 155.50 IOPS, 466.50 MiB/s [2024-11-20T07:14:43.476Z] 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.211 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.211 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.211 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.211 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.211 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.212 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.212 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.212 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.212 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.212 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.212 "name": "raid_bdev1", 00:18:01.212 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:01.212 "strip_size_kb": 0, 00:18:01.212 "state": "online", 00:18:01.212 "raid_level": "raid1", 00:18:01.212 "superblock": true, 00:18:01.212 "num_base_bdevs": 4, 00:18:01.212 "num_base_bdevs_discovered": 3, 00:18:01.212 "num_base_bdevs_operational": 3, 00:18:01.212 "base_bdevs_list": [ 00:18:01.212 { 00:18:01.212 "name": null, 
00:18:01.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.212 "is_configured": false, 00:18:01.212 "data_offset": 0, 00:18:01.212 "data_size": 63488 00:18:01.212 }, 00:18:01.212 { 00:18:01.212 "name": "BaseBdev2", 00:18:01.212 "uuid": "8541d084-bcaf-54b2-b864-2d8851481dcb", 00:18:01.212 "is_configured": true, 00:18:01.212 "data_offset": 2048, 00:18:01.212 "data_size": 63488 00:18:01.212 }, 00:18:01.212 { 00:18:01.212 "name": "BaseBdev3", 00:18:01.212 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:01.212 "is_configured": true, 00:18:01.212 "data_offset": 2048, 00:18:01.212 "data_size": 63488 00:18:01.212 }, 00:18:01.212 { 00:18:01.212 "name": "BaseBdev4", 00:18:01.212 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:01.212 "is_configured": true, 00:18:01.212 "data_offset": 2048, 00:18:01.212 "data_size": 63488 00:18:01.212 } 00:18:01.212 ] 00:18:01.212 }' 00:18:01.212 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.471 [2024-11-20 07:14:43.582311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.471 07:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 
-- # sleep 1 00:18:01.471 [2024-11-20 07:14:43.637692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:01.471 [2024-11-20 07:14:43.639626] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.730 [2024-11-20 07:14:43.777113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:01.989 [2024-11-20 07:14:43.997363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:01.989 [2024-11-20 07:14:43.997733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:02.249 154.00 IOPS, 462.00 MiB/s [2024-11-20T07:14:44.514Z] [2024-11-20 07:14:44.373983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:02.249 [2024-11-20 07:14:44.506390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.509 07:14:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.509 "name": "raid_bdev1", 00:18:02.509 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:02.509 "strip_size_kb": 0, 00:18:02.509 "state": "online", 00:18:02.509 "raid_level": "raid1", 00:18:02.509 "superblock": true, 00:18:02.509 "num_base_bdevs": 4, 00:18:02.509 "num_base_bdevs_discovered": 4, 00:18:02.509 "num_base_bdevs_operational": 4, 00:18:02.509 "process": { 00:18:02.509 "type": "rebuild", 00:18:02.509 "target": "spare", 00:18:02.509 "progress": { 00:18:02.509 "blocks": 10240, 00:18:02.509 "percent": 16 00:18:02.509 } 00:18:02.509 }, 00:18:02.509 "base_bdevs_list": [ 00:18:02.509 { 00:18:02.509 "name": "spare", 00:18:02.509 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:02.509 "is_configured": true, 00:18:02.509 "data_offset": 2048, 00:18:02.509 "data_size": 63488 00:18:02.509 }, 00:18:02.509 { 00:18:02.509 "name": "BaseBdev2", 00:18:02.509 "uuid": "8541d084-bcaf-54b2-b864-2d8851481dcb", 00:18:02.509 "is_configured": true, 00:18:02.509 "data_offset": 2048, 00:18:02.509 "data_size": 63488 00:18:02.509 }, 00:18:02.509 { 00:18:02.509 "name": "BaseBdev3", 00:18:02.509 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:02.509 "is_configured": true, 00:18:02.509 "data_offset": 2048, 00:18:02.509 "data_size": 63488 00:18:02.509 }, 00:18:02.509 { 00:18:02.509 "name": "BaseBdev4", 00:18:02.509 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:02.509 "is_configured": true, 00:18:02.509 "data_offset": 2048, 00:18:02.509 "data_size": 63488 00:18:02.509 } 00:18:02.509 ] 00:18:02.509 }' 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.509 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.767 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:02.768 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.768 07:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.768 [2024-11-20 07:14:44.789142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:02.768 [2024-11-20 07:14:44.848895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:03.027 [2024-11-20 07:14:45.065755] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:03.027 [2024-11-20 07:14:45.065817] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.027 "name": "raid_bdev1", 00:18:03.027 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:03.027 "strip_size_kb": 0, 00:18:03.027 "state": "online", 00:18:03.027 "raid_level": "raid1", 00:18:03.027 "superblock": true, 00:18:03.027 "num_base_bdevs": 4, 00:18:03.027 "num_base_bdevs_discovered": 3, 00:18:03.027 "num_base_bdevs_operational": 3, 00:18:03.027 "process": { 00:18:03.027 "type": "rebuild", 00:18:03.027 "target": "spare", 00:18:03.027 "progress": { 
00:18:03.027 "blocks": 14336, 00:18:03.027 "percent": 22 00:18:03.027 } 00:18:03.027 }, 00:18:03.027 "base_bdevs_list": [ 00:18:03.027 { 00:18:03.027 "name": "spare", 00:18:03.027 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:03.027 "is_configured": true, 00:18:03.027 "data_offset": 2048, 00:18:03.027 "data_size": 63488 00:18:03.027 }, 00:18:03.027 { 00:18:03.027 "name": null, 00:18:03.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.027 "is_configured": false, 00:18:03.027 "data_offset": 0, 00:18:03.027 "data_size": 63488 00:18:03.027 }, 00:18:03.027 { 00:18:03.027 "name": "BaseBdev3", 00:18:03.027 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:03.027 "is_configured": true, 00:18:03.027 "data_offset": 2048, 00:18:03.027 "data_size": 63488 00:18:03.027 }, 00:18:03.027 { 00:18:03.027 "name": "BaseBdev4", 00:18:03.027 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:03.027 "is_configured": true, 00:18:03.027 "data_offset": 2048, 00:18:03.027 "data_size": 63488 00:18:03.027 } 00:18:03.027 ] 00:18:03.027 }' 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.027 [2024-11-20 07:14:45.185447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=521 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.027 "name": "raid_bdev1", 00:18:03.027 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:03.027 "strip_size_kb": 0, 00:18:03.027 "state": "online", 00:18:03.027 "raid_level": "raid1", 00:18:03.027 "superblock": true, 00:18:03.027 "num_base_bdevs": 4, 00:18:03.027 "num_base_bdevs_discovered": 3, 00:18:03.027 "num_base_bdevs_operational": 3, 00:18:03.027 "process": { 00:18:03.027 "type": "rebuild", 00:18:03.027 "target": "spare", 00:18:03.027 "progress": { 00:18:03.027 "blocks": 16384, 00:18:03.027 "percent": 25 00:18:03.027 } 00:18:03.027 }, 00:18:03.027 "base_bdevs_list": [ 00:18:03.027 { 00:18:03.027 "name": "spare", 00:18:03.027 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:03.027 "is_configured": true, 00:18:03.027 "data_offset": 2048, 00:18:03.027 "data_size": 63488 00:18:03.027 }, 00:18:03.027 { 
00:18:03.027 "name": null, 00:18:03.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.027 "is_configured": false, 00:18:03.027 "data_offset": 0, 00:18:03.027 "data_size": 63488 00:18:03.027 }, 00:18:03.027 { 00:18:03.027 "name": "BaseBdev3", 00:18:03.027 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:03.027 "is_configured": true, 00:18:03.027 "data_offset": 2048, 00:18:03.027 "data_size": 63488 00:18:03.027 }, 00:18:03.027 { 00:18:03.027 "name": "BaseBdev4", 00:18:03.027 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:03.027 "is_configured": true, 00:18:03.027 "data_offset": 2048, 00:18:03.027 "data_size": 63488 00:18:03.027 } 00:18:03.027 ] 00:18:03.027 }' 00:18:03.027 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.288 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.288 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.288 130.25 IOPS, 390.75 MiB/s [2024-11-20T07:14:45.553Z] 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.288 07:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.288 [2024-11-20 07:14:45.435644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:03.549 [2024-11-20 07:14:45.645601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:03.549 [2024-11-20 07:14:45.646178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:03.897 [2024-11-20 07:14:46.061568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:04.163 114.80 IOPS, 344.40 MiB/s 
[2024-11-20T07:14:46.428Z] 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.163 [2024-11-20 07:14:46.391184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.163 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.163 "name": "raid_bdev1", 00:18:04.163 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:04.163 "strip_size_kb": 0, 00:18:04.163 "state": "online", 00:18:04.163 "raid_level": "raid1", 00:18:04.163 "superblock": true, 00:18:04.163 "num_base_bdevs": 4, 00:18:04.163 "num_base_bdevs_discovered": 3, 00:18:04.163 "num_base_bdevs_operational": 3, 00:18:04.163 "process": { 00:18:04.163 "type": "rebuild", 00:18:04.163 "target": "spare", 00:18:04.163 
"progress": { 00:18:04.163 "blocks": 30720, 00:18:04.163 "percent": 48 00:18:04.163 } 00:18:04.163 }, 00:18:04.163 "base_bdevs_list": [ 00:18:04.163 { 00:18:04.163 "name": "spare", 00:18:04.163 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:04.163 "is_configured": true, 00:18:04.163 "data_offset": 2048, 00:18:04.163 "data_size": 63488 00:18:04.163 }, 00:18:04.163 { 00:18:04.163 "name": null, 00:18:04.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.163 "is_configured": false, 00:18:04.163 "data_offset": 0, 00:18:04.163 "data_size": 63488 00:18:04.163 }, 00:18:04.163 { 00:18:04.163 "name": "BaseBdev3", 00:18:04.163 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:04.163 "is_configured": true, 00:18:04.163 "data_offset": 2048, 00:18:04.163 "data_size": 63488 00:18:04.163 }, 00:18:04.163 { 00:18:04.163 "name": "BaseBdev4", 00:18:04.163 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:04.163 "is_configured": true, 00:18:04.163 "data_offset": 2048, 00:18:04.163 "data_size": 63488 00:18:04.163 } 00:18:04.163 ] 00:18:04.163 }' 00:18:04.423 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.423 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.423 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.423 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.423 07:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.994 [2024-11-20 07:14:47.202378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:05.515 103.33 IOPS, 310.00 MiB/s [2024-11-20T07:14:47.780Z] 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:05.515 07:14:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.515 "name": "raid_bdev1", 00:18:05.515 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:05.515 "strip_size_kb": 0, 00:18:05.515 "state": "online", 00:18:05.515 "raid_level": "raid1", 00:18:05.515 "superblock": true, 00:18:05.515 "num_base_bdevs": 4, 00:18:05.515 "num_base_bdevs_discovered": 3, 00:18:05.515 "num_base_bdevs_operational": 3, 00:18:05.515 "process": { 00:18:05.515 "type": "rebuild", 00:18:05.515 "target": "spare", 00:18:05.515 "progress": { 00:18:05.515 "blocks": 53248, 00:18:05.515 "percent": 83 00:18:05.515 } 00:18:05.515 }, 00:18:05.515 "base_bdevs_list": [ 00:18:05.515 { 00:18:05.515 "name": "spare", 00:18:05.515 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:05.515 "is_configured": true, 00:18:05.515 "data_offset": 2048, 
00:18:05.515 "data_size": 63488 00:18:05.515 }, 00:18:05.515 { 00:18:05.515 "name": null, 00:18:05.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.515 "is_configured": false, 00:18:05.515 "data_offset": 0, 00:18:05.515 "data_size": 63488 00:18:05.515 }, 00:18:05.515 { 00:18:05.515 "name": "BaseBdev3", 00:18:05.515 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:05.515 "is_configured": true, 00:18:05.515 "data_offset": 2048, 00:18:05.515 "data_size": 63488 00:18:05.515 }, 00:18:05.515 { 00:18:05.515 "name": "BaseBdev4", 00:18:05.515 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:05.515 "is_configured": true, 00:18:05.515 "data_offset": 2048, 00:18:05.515 "data_size": 63488 00:18:05.515 } 00:18:05.515 ] 00:18:05.515 }' 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.515 07:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.139 [2024-11-20 07:14:48.091030] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:06.139 [2024-11-20 07:14:48.196658] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:06.139 [2024-11-20 07:14:48.201122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.406 92.86 IOPS, 278.57 MiB/s [2024-11-20T07:14:48.671Z] 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.406 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:18:06.406 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.406 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.406 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.406 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.667 "name": "raid_bdev1", 00:18:06.667 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:06.667 "strip_size_kb": 0, 00:18:06.667 "state": "online", 00:18:06.667 "raid_level": "raid1", 00:18:06.667 "superblock": true, 00:18:06.667 "num_base_bdevs": 4, 00:18:06.667 "num_base_bdevs_discovered": 3, 00:18:06.667 "num_base_bdevs_operational": 3, 00:18:06.667 "base_bdevs_list": [ 00:18:06.667 { 00:18:06.667 "name": "spare", 00:18:06.667 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:06.667 "is_configured": true, 00:18:06.667 "data_offset": 2048, 00:18:06.667 "data_size": 63488 00:18:06.667 }, 00:18:06.667 { 00:18:06.667 "name": null, 00:18:06.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.667 "is_configured": false, 00:18:06.667 "data_offset": 0, 00:18:06.667 "data_size": 63488 00:18:06.667 }, 00:18:06.667 { 00:18:06.667 "name": "BaseBdev3", 
00:18:06.667 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:06.667 "is_configured": true, 00:18:06.667 "data_offset": 2048, 00:18:06.667 "data_size": 63488 00:18:06.667 }, 00:18:06.667 { 00:18:06.667 "name": "BaseBdev4", 00:18:06.667 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:06.667 "is_configured": true, 00:18:06.667 "data_offset": 2048, 00:18:06.667 "data_size": 63488 00:18:06.667 } 00:18:06.667 ] 00:18:06.667 }' 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.667 "name": "raid_bdev1", 00:18:06.667 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:06.667 "strip_size_kb": 0, 00:18:06.667 "state": "online", 00:18:06.667 "raid_level": "raid1", 00:18:06.667 "superblock": true, 00:18:06.667 "num_base_bdevs": 4, 00:18:06.667 "num_base_bdevs_discovered": 3, 00:18:06.667 "num_base_bdevs_operational": 3, 00:18:06.667 "base_bdevs_list": [ 00:18:06.667 { 00:18:06.667 "name": "spare", 00:18:06.667 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:06.667 "is_configured": true, 00:18:06.667 "data_offset": 2048, 00:18:06.667 "data_size": 63488 00:18:06.667 }, 00:18:06.667 { 00:18:06.667 "name": null, 00:18:06.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.667 "is_configured": false, 00:18:06.667 "data_offset": 0, 00:18:06.667 "data_size": 63488 00:18:06.667 }, 00:18:06.667 { 00:18:06.667 "name": "BaseBdev3", 00:18:06.667 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:06.667 "is_configured": true, 00:18:06.667 "data_offset": 2048, 00:18:06.667 "data_size": 63488 00:18:06.667 }, 00:18:06.667 { 00:18:06.667 "name": "BaseBdev4", 00:18:06.667 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:06.667 "is_configured": true, 00:18:06.667 "data_offset": 2048, 00:18:06.667 "data_size": 63488 00:18:06.667 } 00:18:06.667 ] 00:18:06.667 }' 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.667 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.927 07:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.927 "name": "raid_bdev1", 00:18:06.927 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:06.927 "strip_size_kb": 0, 00:18:06.927 "state": "online", 
00:18:06.927 "raid_level": "raid1", 00:18:06.927 "superblock": true, 00:18:06.927 "num_base_bdevs": 4, 00:18:06.927 "num_base_bdevs_discovered": 3, 00:18:06.927 "num_base_bdevs_operational": 3, 00:18:06.927 "base_bdevs_list": [ 00:18:06.927 { 00:18:06.927 "name": "spare", 00:18:06.927 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:06.927 "is_configured": true, 00:18:06.927 "data_offset": 2048, 00:18:06.927 "data_size": 63488 00:18:06.927 }, 00:18:06.927 { 00:18:06.927 "name": null, 00:18:06.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.927 "is_configured": false, 00:18:06.927 "data_offset": 0, 00:18:06.927 "data_size": 63488 00:18:06.927 }, 00:18:06.927 { 00:18:06.927 "name": "BaseBdev3", 00:18:06.927 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:06.927 "is_configured": true, 00:18:06.927 "data_offset": 2048, 00:18:06.927 "data_size": 63488 00:18:06.927 }, 00:18:06.927 { 00:18:06.927 "name": "BaseBdev4", 00:18:06.927 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:06.927 "is_configured": true, 00:18:06.927 "data_offset": 2048, 00:18:06.927 "data_size": 63488 00:18:06.927 } 00:18:06.927 ] 00:18:06.927 }' 00:18:06.927 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.927 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.188 85.50 IOPS, 256.50 MiB/s [2024-11-20T07:14:49.453Z] 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.188 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.188 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.188 [2024-11-20 07:14:49.411626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.188 [2024-11-20 07:14:49.411668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:18:07.446 00:18:07.446 Latency(us) 00:18:07.446 [2024-11-20T07:14:49.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.446 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:07.446 raid_bdev1 : 8.19 84.11 252.32 0.00 0.00 16172.88 339.84 111268.11 00:18:07.446 [2024-11-20T07:14:49.711Z] =================================================================================================================== 00:18:07.446 [2024-11-20T07:14:49.711Z] Total : 84.11 252.32 0.00 0.00 16172.88 339.84 111268.11 00:18:07.446 [2024-11-20 07:14:49.533442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.446 [2024-11-20 07:14:49.533507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.446 [2024-11-20 07:14:49.533624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.446 [2024-11-20 07:14:49.533638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:07.446 { 00:18:07.446 "results": [ 00:18:07.446 { 00:18:07.446 "job": "raid_bdev1", 00:18:07.446 "core_mask": "0x1", 00:18:07.446 "workload": "randrw", 00:18:07.446 "percentage": 50, 00:18:07.446 "status": "finished", 00:18:07.446 "queue_depth": 2, 00:18:07.446 "io_size": 3145728, 00:18:07.446 "runtime": 8.191887, 00:18:07.446 "iops": 84.10760548821047, 00:18:07.446 "mibps": 252.3228164646314, 00:18:07.446 "io_failed": 0, 00:18:07.446 "io_timeout": 0, 00:18:07.446 "avg_latency_us": 16172.87873191322, 00:18:07.446 "min_latency_us": 339.8427947598253, 00:18:07.446 "max_latency_us": 111268.10829694323 00:18:07.446 } 00:18:07.446 ], 00:18:07.446 "core_count": 1 00:18:07.446 } 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.446 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:07.447 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.447 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:07.447 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.447 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:07.447 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.447 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.447 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:07.706 /dev/nbd0 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.706 1+0 records in 00:18:07.706 1+0 records out 00:18:07.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266506 s, 15.4 MB/s 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:07.706 07:14:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.706 07:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:07.964 /dev/nbd1 00:18:07.964 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:18:07.964 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:07.964 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:07.964 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:07.964 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.964 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.965 1+0 records in 00:18:07.965 1+0 records out 00:18:07.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384272 s, 10.7 MB/s 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.965 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:08.226 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:08.226 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.226 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:08.226 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.226 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:08.226 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.226 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:08.490 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:08.755 /dev/nbd1 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.755 1+0 records in 00:18:08.755 1+0 records out 00:18:08.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297268 s, 13.8 MB/s 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.755 07:14:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.755 07:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:09.022 07:14:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.022 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.291 
[2024-11-20 07:14:51.483895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.291 [2024-11-20 07:14:51.483977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.291 [2024-11-20 07:14:51.484003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:09.291 [2024-11-20 07:14:51.484017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.291 [2024-11-20 07:14:51.486582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.291 [2024-11-20 07:14:51.486634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.291 [2024-11-20 07:14:51.486748] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:09.291 [2024-11-20 07:14:51.486822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.291 [2024-11-20 07:14:51.486965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:09.291 [2024-11-20 07:14:51.487090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:09.291 spare 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.291 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 [2024-11-20 07:14:51.587015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:09.560 [2024-11-20 07:14:51.587085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:09.560 [2024-11-20 07:14:51.587481] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:09.560 [2024-11-20 07:14:51.587793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:09.560 [2024-11-20 07:14:51.587814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:09.560 [2024-11-20 07:14:51.588071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.560 "name": "raid_bdev1", 00:18:09.560 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:09.560 "strip_size_kb": 0, 00:18:09.560 "state": "online", 00:18:09.560 "raid_level": "raid1", 00:18:09.560 "superblock": true, 00:18:09.560 "num_base_bdevs": 4, 00:18:09.560 "num_base_bdevs_discovered": 3, 00:18:09.560 "num_base_bdevs_operational": 3, 00:18:09.560 "base_bdevs_list": [ 00:18:09.560 { 00:18:09.560 "name": "spare", 00:18:09.560 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:09.560 "is_configured": true, 00:18:09.560 "data_offset": 2048, 00:18:09.560 "data_size": 63488 00:18:09.560 }, 00:18:09.560 { 00:18:09.560 "name": null, 00:18:09.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.560 "is_configured": false, 00:18:09.560 "data_offset": 2048, 00:18:09.560 "data_size": 63488 00:18:09.560 }, 00:18:09.560 { 00:18:09.560 "name": "BaseBdev3", 00:18:09.560 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:09.560 "is_configured": true, 00:18:09.560 "data_offset": 2048, 00:18:09.560 "data_size": 63488 00:18:09.560 }, 00:18:09.560 { 00:18:09.560 "name": "BaseBdev4", 00:18:09.560 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:09.560 "is_configured": true, 00:18:09.560 "data_offset": 2048, 00:18:09.560 "data_size": 63488 00:18:09.560 } 00:18:09.560 ] 00:18:09.560 }' 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.560 07:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.831 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.122 "name": "raid_bdev1", 00:18:10.122 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:10.122 "strip_size_kb": 0, 00:18:10.122 "state": "online", 00:18:10.122 "raid_level": "raid1", 00:18:10.122 "superblock": true, 00:18:10.122 "num_base_bdevs": 4, 00:18:10.122 "num_base_bdevs_discovered": 3, 00:18:10.122 "num_base_bdevs_operational": 3, 00:18:10.122 "base_bdevs_list": [ 00:18:10.122 { 00:18:10.122 "name": "spare", 00:18:10.122 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:10.122 "is_configured": true, 00:18:10.122 "data_offset": 2048, 00:18:10.122 "data_size": 63488 00:18:10.122 }, 00:18:10.122 { 00:18:10.122 "name": null, 00:18:10.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.122 "is_configured": false, 00:18:10.122 "data_offset": 2048, 00:18:10.122 "data_size": 63488 00:18:10.122 }, 00:18:10.122 { 00:18:10.122 "name": 
"BaseBdev3", 00:18:10.122 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:10.122 "is_configured": true, 00:18:10.122 "data_offset": 2048, 00:18:10.122 "data_size": 63488 00:18:10.122 }, 00:18:10.122 { 00:18:10.122 "name": "BaseBdev4", 00:18:10.122 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:10.122 "is_configured": true, 00:18:10.122 "data_offset": 2048, 00:18:10.122 "data_size": 63488 00:18:10.122 } 00:18:10.122 ] 00:18:10.122 }' 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.122 [2024-11-20 07:14:52.267019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.122 "name": "raid_bdev1", 00:18:10.122 "uuid": 
"234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:10.122 "strip_size_kb": 0, 00:18:10.122 "state": "online", 00:18:10.122 "raid_level": "raid1", 00:18:10.122 "superblock": true, 00:18:10.122 "num_base_bdevs": 4, 00:18:10.122 "num_base_bdevs_discovered": 2, 00:18:10.122 "num_base_bdevs_operational": 2, 00:18:10.122 "base_bdevs_list": [ 00:18:10.122 { 00:18:10.122 "name": null, 00:18:10.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.122 "is_configured": false, 00:18:10.122 "data_offset": 0, 00:18:10.122 "data_size": 63488 00:18:10.122 }, 00:18:10.122 { 00:18:10.122 "name": null, 00:18:10.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.122 "is_configured": false, 00:18:10.122 "data_offset": 2048, 00:18:10.122 "data_size": 63488 00:18:10.122 }, 00:18:10.122 { 00:18:10.122 "name": "BaseBdev3", 00:18:10.122 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:10.122 "is_configured": true, 00:18:10.122 "data_offset": 2048, 00:18:10.122 "data_size": 63488 00:18:10.122 }, 00:18:10.122 { 00:18:10.122 "name": "BaseBdev4", 00:18:10.122 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:10.122 "is_configured": true, 00:18:10.122 "data_offset": 2048, 00:18:10.122 "data_size": 63488 00:18:10.122 } 00:18:10.122 ] 00:18:10.122 }' 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.122 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.698 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.698 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.698 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.698 [2024-11-20 07:14:52.730329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.698 [2024-11-20 07:14:52.730556] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:10.698 [2024-11-20 07:14:52.730580] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:10.698 [2024-11-20 07:14:52.730617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.698 [2024-11-20 07:14:52.746891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:10.698 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.698 07:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:10.698 [2024-11-20 07:14:52.748965] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:11.635 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.635 "name": "raid_bdev1", 00:18:11.635 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:11.635 "strip_size_kb": 0, 00:18:11.635 "state": "online", 00:18:11.635 "raid_level": "raid1", 00:18:11.635 "superblock": true, 00:18:11.635 "num_base_bdevs": 4, 00:18:11.635 "num_base_bdevs_discovered": 3, 00:18:11.635 "num_base_bdevs_operational": 3, 00:18:11.635 "process": { 00:18:11.635 "type": "rebuild", 00:18:11.635 "target": "spare", 00:18:11.635 "progress": { 00:18:11.635 "blocks": 20480, 00:18:11.635 "percent": 32 00:18:11.635 } 00:18:11.635 }, 00:18:11.635 "base_bdevs_list": [ 00:18:11.635 { 00:18:11.635 "name": "spare", 00:18:11.635 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:11.635 "is_configured": true, 00:18:11.635 "data_offset": 2048, 00:18:11.635 "data_size": 63488 00:18:11.635 }, 00:18:11.635 { 00:18:11.635 "name": null, 00:18:11.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.635 "is_configured": false, 00:18:11.635 "data_offset": 2048, 00:18:11.635 "data_size": 63488 00:18:11.635 }, 00:18:11.635 { 00:18:11.635 "name": "BaseBdev3", 00:18:11.635 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:11.635 "is_configured": true, 00:18:11.635 "data_offset": 2048, 00:18:11.635 "data_size": 63488 00:18:11.635 }, 00:18:11.635 { 00:18:11.636 "name": "BaseBdev4", 00:18:11.636 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:11.636 "is_configured": true, 00:18:11.636 "data_offset": 2048, 00:18:11.636 "data_size": 63488 00:18:11.636 } 00:18:11.636 ] 00:18:11.636 }' 00:18:11.636 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.636 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.636 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.895 [2024-11-20 07:14:53.912743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.895 [2024-11-20 07:14:53.954966] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.895 [2024-11-20 07:14:53.955072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.895 [2024-11-20 07:14:53.955090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.895 [2024-11-20 07:14:53.955100] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.895 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.896 07:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.896 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.896 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.896 "name": "raid_bdev1", 00:18:11.896 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:11.896 "strip_size_kb": 0, 00:18:11.896 "state": "online", 00:18:11.896 "raid_level": "raid1", 00:18:11.896 "superblock": true, 00:18:11.896 "num_base_bdevs": 4, 00:18:11.896 "num_base_bdevs_discovered": 2, 00:18:11.896 "num_base_bdevs_operational": 2, 00:18:11.896 "base_bdevs_list": [ 00:18:11.896 { 00:18:11.896 "name": null, 00:18:11.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.896 "is_configured": false, 00:18:11.896 "data_offset": 0, 00:18:11.896 "data_size": 63488 00:18:11.896 }, 00:18:11.896 { 00:18:11.896 "name": null, 00:18:11.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.896 "is_configured": false, 00:18:11.896 "data_offset": 2048, 00:18:11.896 "data_size": 63488 00:18:11.896 }, 00:18:11.896 { 00:18:11.896 "name": "BaseBdev3", 00:18:11.896 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:11.896 "is_configured": true, 00:18:11.896 "data_offset": 2048, 
00:18:11.896 "data_size": 63488 00:18:11.896 }, 00:18:11.896 { 00:18:11.896 "name": "BaseBdev4", 00:18:11.896 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:11.896 "is_configured": true, 00:18:11.896 "data_offset": 2048, 00:18:11.896 "data_size": 63488 00:18:11.896 } 00:18:11.896 ] 00:18:11.896 }' 00:18:11.896 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.896 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.464 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.464 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.464 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.464 [2024-11-20 07:14:54.470746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.464 [2024-11-20 07:14:54.470826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.464 [2024-11-20 07:14:54.470859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:12.464 [2024-11-20 07:14:54.470872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.464 [2024-11-20 07:14:54.471410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.464 [2024-11-20 07:14:54.471442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.464 [2024-11-20 07:14:54.471546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:12.464 [2024-11-20 07:14:54.471569] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:12.464 [2024-11-20 07:14:54.471580] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding 
bdev spare to raid bdev raid_bdev1. 00:18:12.464 [2024-11-20 07:14:54.471604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.464 [2024-11-20 07:14:54.489173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:12.464 spare 00:18:12.464 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.464 07:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:12.464 [2024-11-20 07:14:54.491345] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.401 "name": "raid_bdev1", 00:18:13.401 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:13.401 
"strip_size_kb": 0, 00:18:13.401 "state": "online", 00:18:13.401 "raid_level": "raid1", 00:18:13.401 "superblock": true, 00:18:13.401 "num_base_bdevs": 4, 00:18:13.401 "num_base_bdevs_discovered": 3, 00:18:13.401 "num_base_bdevs_operational": 3, 00:18:13.401 "process": { 00:18:13.401 "type": "rebuild", 00:18:13.401 "target": "spare", 00:18:13.401 "progress": { 00:18:13.401 "blocks": 20480, 00:18:13.401 "percent": 32 00:18:13.401 } 00:18:13.401 }, 00:18:13.401 "base_bdevs_list": [ 00:18:13.401 { 00:18:13.401 "name": "spare", 00:18:13.401 "uuid": "45ab6353-2997-5688-a219-a7501b8656ed", 00:18:13.401 "is_configured": true, 00:18:13.401 "data_offset": 2048, 00:18:13.401 "data_size": 63488 00:18:13.401 }, 00:18:13.401 { 00:18:13.401 "name": null, 00:18:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.401 "is_configured": false, 00:18:13.401 "data_offset": 2048, 00:18:13.401 "data_size": 63488 00:18:13.401 }, 00:18:13.401 { 00:18:13.401 "name": "BaseBdev3", 00:18:13.401 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:13.401 "is_configured": true, 00:18:13.401 "data_offset": 2048, 00:18:13.401 "data_size": 63488 00:18:13.401 }, 00:18:13.401 { 00:18:13.401 "name": "BaseBdev4", 00:18:13.401 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:13.401 "is_configured": true, 00:18:13.401 "data_offset": 2048, 00:18:13.401 "data_size": 63488 00:18:13.401 } 00:18:13.401 ] 00:18:13.401 }' 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete 
spare 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.401 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.401 [2024-11-20 07:14:55.626657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.661 [2024-11-20 07:14:55.697578] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:13.661 [2024-11-20 07:14:55.697666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.661 [2024-11-20 07:14:55.697686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.661 [2024-11-20 07:14:55.697695] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.661 
07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.661 "name": "raid_bdev1", 00:18:13.661 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:13.661 "strip_size_kb": 0, 00:18:13.661 "state": "online", 00:18:13.661 "raid_level": "raid1", 00:18:13.661 "superblock": true, 00:18:13.661 "num_base_bdevs": 4, 00:18:13.661 "num_base_bdevs_discovered": 2, 00:18:13.661 "num_base_bdevs_operational": 2, 00:18:13.661 "base_bdevs_list": [ 00:18:13.661 { 00:18:13.661 "name": null, 00:18:13.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.661 "is_configured": false, 00:18:13.661 "data_offset": 0, 00:18:13.661 "data_size": 63488 00:18:13.661 }, 00:18:13.661 { 00:18:13.661 "name": null, 00:18:13.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.661 "is_configured": false, 00:18:13.661 "data_offset": 2048, 00:18:13.661 "data_size": 63488 00:18:13.661 }, 00:18:13.661 { 00:18:13.661 "name": "BaseBdev3", 00:18:13.661 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:13.661 "is_configured": true, 00:18:13.661 "data_offset": 2048, 00:18:13.661 "data_size": 63488 00:18:13.661 }, 00:18:13.661 { 00:18:13.661 "name": "BaseBdev4", 00:18:13.661 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:13.661 "is_configured": true, 00:18:13.661 "data_offset": 2048, 
00:18:13.661 "data_size": 63488 00:18:13.661 } 00:18:13.661 ] 00:18:13.661 }' 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.661 07:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.987 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.987 "name": "raid_bdev1", 00:18:13.987 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:13.987 "strip_size_kb": 0, 00:18:13.987 "state": "online", 00:18:13.987 "raid_level": "raid1", 00:18:13.987 "superblock": true, 00:18:13.987 "num_base_bdevs": 4, 00:18:13.987 "num_base_bdevs_discovered": 2, 00:18:13.988 "num_base_bdevs_operational": 2, 00:18:13.988 "base_bdevs_list": [ 00:18:13.988 { 00:18:13.988 "name": null, 00:18:13.988 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:13.988 "is_configured": false, 00:18:13.988 "data_offset": 0, 00:18:13.988 "data_size": 63488 00:18:13.988 }, 00:18:13.988 { 00:18:13.988 "name": null, 00:18:13.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.988 "is_configured": false, 00:18:13.988 "data_offset": 2048, 00:18:13.988 "data_size": 63488 00:18:13.988 }, 00:18:13.988 { 00:18:13.988 "name": "BaseBdev3", 00:18:13.988 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:13.988 "is_configured": true, 00:18:13.988 "data_offset": 2048, 00:18:13.988 "data_size": 63488 00:18:13.988 }, 00:18:13.988 { 00:18:13.988 "name": "BaseBdev4", 00:18:13.988 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:13.988 "is_configured": true, 00:18:13.988 "data_offset": 2048, 00:18:13.988 "data_size": 63488 00:18:13.988 } 00:18:13.988 ] 00:18:13.988 }' 00:18:13.988 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:14.247 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:14.248 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:14.248 [2024-11-20 07:14:56.313046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:14.248 [2024-11-20 07:14:56.313154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.248 [2024-11-20 07:14:56.313199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:14.248 [2024-11-20 07:14:56.313236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.248 [2024-11-20 07:14:56.313790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.248 [2024-11-20 07:14:56.313859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:14.248 [2024-11-20 07:14:56.313965] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:14.248 [2024-11-20 07:14:56.313981] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:14.248 [2024-11-20 07:14:56.313995] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:14.248 [2024-11-20 07:14:56.314007] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:14.248 BaseBdev1 00:18:14.248 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.248 07:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.186 "name": "raid_bdev1", 00:18:15.186 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:15.186 "strip_size_kb": 0, 00:18:15.186 "state": "online", 00:18:15.186 "raid_level": "raid1", 00:18:15.186 "superblock": true, 00:18:15.186 "num_base_bdevs": 4, 00:18:15.186 "num_base_bdevs_discovered": 2, 00:18:15.186 "num_base_bdevs_operational": 2, 00:18:15.186 "base_bdevs_list": [ 00:18:15.186 { 00:18:15.186 "name": null, 00:18:15.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.186 
"is_configured": false, 00:18:15.186 "data_offset": 0, 00:18:15.186 "data_size": 63488 00:18:15.186 }, 00:18:15.186 { 00:18:15.186 "name": null, 00:18:15.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.186 "is_configured": false, 00:18:15.186 "data_offset": 2048, 00:18:15.186 "data_size": 63488 00:18:15.186 }, 00:18:15.186 { 00:18:15.186 "name": "BaseBdev3", 00:18:15.186 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:15.186 "is_configured": true, 00:18:15.186 "data_offset": 2048, 00:18:15.186 "data_size": 63488 00:18:15.186 }, 00:18:15.186 { 00:18:15.186 "name": "BaseBdev4", 00:18:15.186 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:15.186 "is_configured": true, 00:18:15.186 "data_offset": 2048, 00:18:15.186 "data_size": 63488 00:18:15.186 } 00:18:15.186 ] 00:18:15.186 }' 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.186 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.756 "name": "raid_bdev1", 00:18:15.756 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:15.756 "strip_size_kb": 0, 00:18:15.756 "state": "online", 00:18:15.756 "raid_level": "raid1", 00:18:15.756 "superblock": true, 00:18:15.756 "num_base_bdevs": 4, 00:18:15.756 "num_base_bdevs_discovered": 2, 00:18:15.756 "num_base_bdevs_operational": 2, 00:18:15.756 "base_bdevs_list": [ 00:18:15.756 { 00:18:15.756 "name": null, 00:18:15.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.756 "is_configured": false, 00:18:15.756 "data_offset": 0, 00:18:15.756 "data_size": 63488 00:18:15.756 }, 00:18:15.756 { 00:18:15.756 "name": null, 00:18:15.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.756 "is_configured": false, 00:18:15.756 "data_offset": 2048, 00:18:15.756 "data_size": 63488 00:18:15.756 }, 00:18:15.756 { 00:18:15.756 "name": "BaseBdev3", 00:18:15.756 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:15.756 "is_configured": true, 00:18:15.756 "data_offset": 2048, 00:18:15.756 "data_size": 63488 00:18:15.756 }, 00:18:15.756 { 00:18:15.756 "name": "BaseBdev4", 00:18:15.756 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:15.756 "is_configured": true, 00:18:15.756 "data_offset": 2048, 00:18:15.756 "data_size": 63488 00:18:15.756 } 00:18:15.756 ] 00:18:15.756 }' 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.756 [2024-11-20 07:14:57.938582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.756 [2024-11-20 07:14:57.938826] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:15.756 [2024-11-20 07:14:57.938905] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:15.756 request: 00:18:15.756 { 00:18:15.756 "base_bdev": "BaseBdev1", 00:18:15.756 "raid_bdev": "raid_bdev1", 00:18:15.756 "method": "bdev_raid_add_base_bdev", 00:18:15.756 "req_id": 1 00:18:15.756 } 00:18:15.756 Got JSON-RPC error response 00:18:15.756 response: 00:18:15.756 { 
00:18:15.756 "code": -22, 00:18:15.756 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:15.756 } 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.756 07:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.697 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.987 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:16.987 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.987 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.987 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.987 07:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.987 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.987 "name": "raid_bdev1", 00:18:16.987 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:16.987 "strip_size_kb": 0, 00:18:16.987 "state": "online", 00:18:16.987 "raid_level": "raid1", 00:18:16.987 "superblock": true, 00:18:16.987 "num_base_bdevs": 4, 00:18:16.987 "num_base_bdevs_discovered": 2, 00:18:16.987 "num_base_bdevs_operational": 2, 00:18:16.987 "base_bdevs_list": [ 00:18:16.987 { 00:18:16.987 "name": null, 00:18:16.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.987 "is_configured": false, 00:18:16.987 "data_offset": 0, 00:18:16.987 "data_size": 63488 00:18:16.987 }, 00:18:16.987 { 00:18:16.987 "name": null, 00:18:16.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.987 "is_configured": false, 00:18:16.987 "data_offset": 2048, 00:18:16.987 "data_size": 63488 00:18:16.987 }, 00:18:16.987 { 00:18:16.987 "name": "BaseBdev3", 00:18:16.987 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:16.987 "is_configured": true, 00:18:16.987 "data_offset": 2048, 00:18:16.987 "data_size": 63488 00:18:16.987 }, 00:18:16.987 { 00:18:16.987 "name": "BaseBdev4", 00:18:16.987 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:16.987 "is_configured": true, 00:18:16.987 "data_offset": 2048, 00:18:16.987 "data_size": 63488 00:18:16.987 } 00:18:16.987 ] 00:18:16.987 }' 00:18:16.987 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.987 07:14:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.246 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.246 "name": "raid_bdev1", 00:18:17.246 "uuid": "234ca2a2-3122-432b-b762-f596c3c87de1", 00:18:17.247 "strip_size_kb": 0, 00:18:17.247 "state": "online", 00:18:17.247 "raid_level": "raid1", 00:18:17.247 "superblock": true, 00:18:17.247 "num_base_bdevs": 4, 00:18:17.247 "num_base_bdevs_discovered": 2, 00:18:17.247 "num_base_bdevs_operational": 2, 00:18:17.247 "base_bdevs_list": [ 00:18:17.247 { 00:18:17.247 "name": null, 00:18:17.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.247 "is_configured": false, 00:18:17.247 "data_offset": 0, 00:18:17.247 "data_size": 63488 00:18:17.247 }, 00:18:17.247 { 00:18:17.247 "name": null, 00:18:17.247 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:17.247 "is_configured": false, 00:18:17.247 "data_offset": 2048, 00:18:17.247 "data_size": 63488 00:18:17.247 }, 00:18:17.247 { 00:18:17.247 "name": "BaseBdev3", 00:18:17.247 "uuid": "7063fa4e-c02b-5b1a-9765-4066cf57a07c", 00:18:17.247 "is_configured": true, 00:18:17.247 "data_offset": 2048, 00:18:17.247 "data_size": 63488 00:18:17.247 }, 00:18:17.247 { 00:18:17.247 "name": "BaseBdev4", 00:18:17.247 "uuid": "c24ee9c6-59b3-53ae-8790-1ba2f4a55d4f", 00:18:17.247 "is_configured": true, 00:18:17.247 "data_offset": 2048, 00:18:17.247 "data_size": 63488 00:18:17.247 } 00:18:17.247 ] 00:18:17.247 }' 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79629 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79629 ']' 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79629 00:18:17.247 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:17.507 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.507 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79629 00:18:17.507 killing process with pid 79629 00:18:17.507 Received shutdown signal, test time was about 18.247497 seconds 00:18:17.507 00:18:17.507 Latency(us) 00:18:17.507 [2024-11-20T07:14:59.772Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:18:17.507 [2024-11-20T07:14:59.772Z] =================================================================================================================== 00:18:17.507 [2024-11-20T07:14:59.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.507 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.507 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.507 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79629' 00:18:17.507 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79629 00:18:17.507 [2024-11-20 07:14:59.543660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.507 07:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79629 00:18:17.507 [2024-11-20 07:14:59.543809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.507 [2024-11-20 07:14:59.543898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.507 [2024-11-20 07:14:59.543915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:17.767 [2024-11-20 07:14:59.990694] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.147 07:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:19.147 00:18:19.147 real 0m21.945s 00:18:19.147 user 0m28.868s 00:18:19.147 sys 0m2.715s 00:18:19.147 ************************************ 00:18:19.147 END TEST raid_rebuild_test_sb_io 00:18:19.147 ************************************ 00:18:19.147 07:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.147 07:15:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.147 07:15:01 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:19.147 07:15:01 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:19.147 07:15:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:19.147 07:15:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.147 07:15:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.147 ************************************ 00:18:19.147 START TEST raid5f_state_function_test 00:18:19.147 ************************************ 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.147 07:15:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80351 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:19.147 07:15:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80351' 00:18:19.147 Process raid pid: 80351 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80351 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80351 ']' 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.147 07:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.147 [2024-11-20 07:15:01.388310] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:18:19.147 [2024-11-20 07:15:01.388453] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.407 [2024-11-20 07:15:01.571851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.667 [2024-11-20 07:15:01.710485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.927 [2024-11-20 07:15:01.946836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.927 [2024-11-20 07:15:01.946889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.186 [2024-11-20 07:15:02.286686] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.186 [2024-11-20 07:15:02.286748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.186 [2024-11-20 07:15:02.286760] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.186 [2024-11-20 07:15:02.286770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.186 [2024-11-20 07:15:02.286776] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:20.186 [2024-11-20 07:15:02.286786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:20.186 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.186 "name": "Existed_Raid", 00:18:20.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.186 "strip_size_kb": 64, 00:18:20.186 "state": "configuring", 00:18:20.186 "raid_level": "raid5f", 00:18:20.186 "superblock": false, 00:18:20.186 "num_base_bdevs": 3, 00:18:20.187 "num_base_bdevs_discovered": 0, 00:18:20.187 "num_base_bdevs_operational": 3, 00:18:20.187 "base_bdevs_list": [ 00:18:20.187 { 00:18:20.187 "name": "BaseBdev1", 00:18:20.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.187 "is_configured": false, 00:18:20.187 "data_offset": 0, 00:18:20.187 "data_size": 0 00:18:20.187 }, 00:18:20.187 { 00:18:20.187 "name": "BaseBdev2", 00:18:20.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.187 "is_configured": false, 00:18:20.187 "data_offset": 0, 00:18:20.187 "data_size": 0 00:18:20.187 }, 00:18:20.187 { 00:18:20.187 "name": "BaseBdev3", 00:18:20.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.187 "is_configured": false, 00:18:20.187 "data_offset": 0, 00:18:20.187 "data_size": 0 00:18:20.187 } 00:18:20.187 ] 00:18:20.187 }' 00:18:20.187 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.187 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.756 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:20.756 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.756 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.756 [2024-11-20 07:15:02.749919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.756 [2024-11-20 07:15:02.749961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:20.756 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.757 [2024-11-20 07:15:02.761898] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.757 [2024-11-20 07:15:02.761951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.757 [2024-11-20 07:15:02.761962] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.757 [2024-11-20 07:15:02.761973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.757 [2024-11-20 07:15:02.761981] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.757 [2024-11-20 07:15:02.761991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.757 BaseBdev1 00:18:20.757 [2024-11-20 07:15:02.811375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.757 07:15:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.757 [ 00:18:20.757 { 00:18:20.757 "name": "BaseBdev1", 00:18:20.757 "aliases": [ 00:18:20.757 "55a71908-e56b-4f52-9302-0de467bd593a" 00:18:20.757 ], 00:18:20.757 "product_name": "Malloc disk", 00:18:20.757 "block_size": 512, 00:18:20.757 "num_blocks": 65536, 00:18:20.757 "uuid": "55a71908-e56b-4f52-9302-0de467bd593a", 00:18:20.757 "assigned_rate_limits": { 00:18:20.757 "rw_ios_per_sec": 0, 00:18:20.757 
"rw_mbytes_per_sec": 0, 00:18:20.757 "r_mbytes_per_sec": 0, 00:18:20.757 "w_mbytes_per_sec": 0 00:18:20.757 }, 00:18:20.757 "claimed": true, 00:18:20.757 "claim_type": "exclusive_write", 00:18:20.757 "zoned": false, 00:18:20.757 "supported_io_types": { 00:18:20.757 "read": true, 00:18:20.757 "write": true, 00:18:20.757 "unmap": true, 00:18:20.757 "flush": true, 00:18:20.757 "reset": true, 00:18:20.757 "nvme_admin": false, 00:18:20.757 "nvme_io": false, 00:18:20.757 "nvme_io_md": false, 00:18:20.757 "write_zeroes": true, 00:18:20.757 "zcopy": true, 00:18:20.757 "get_zone_info": false, 00:18:20.757 "zone_management": false, 00:18:20.757 "zone_append": false, 00:18:20.757 "compare": false, 00:18:20.757 "compare_and_write": false, 00:18:20.757 "abort": true, 00:18:20.757 "seek_hole": false, 00:18:20.757 "seek_data": false, 00:18:20.757 "copy": true, 00:18:20.757 "nvme_iov_md": false 00:18:20.757 }, 00:18:20.757 "memory_domains": [ 00:18:20.757 { 00:18:20.757 "dma_device_id": "system", 00:18:20.757 "dma_device_type": 1 00:18:20.757 }, 00:18:20.757 { 00:18:20.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.757 "dma_device_type": 2 00:18:20.757 } 00:18:20.757 ], 00:18:20.757 "driver_specific": {} 00:18:20.757 } 00:18:20.757 ] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.757 07:15:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.757 "name": "Existed_Raid", 00:18:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.757 "strip_size_kb": 64, 00:18:20.757 "state": "configuring", 00:18:20.757 "raid_level": "raid5f", 00:18:20.757 "superblock": false, 00:18:20.757 "num_base_bdevs": 3, 00:18:20.757 "num_base_bdevs_discovered": 1, 00:18:20.757 "num_base_bdevs_operational": 3, 00:18:20.757 "base_bdevs_list": [ 00:18:20.757 { 00:18:20.757 "name": "BaseBdev1", 00:18:20.757 "uuid": "55a71908-e56b-4f52-9302-0de467bd593a", 00:18:20.757 "is_configured": true, 00:18:20.757 "data_offset": 0, 00:18:20.757 "data_size": 65536 00:18:20.757 }, 00:18:20.757 { 00:18:20.757 "name": 
"BaseBdev2", 00:18:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.757 "is_configured": false, 00:18:20.757 "data_offset": 0, 00:18:20.757 "data_size": 0 00:18:20.757 }, 00:18:20.757 { 00:18:20.757 "name": "BaseBdev3", 00:18:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.757 "is_configured": false, 00:18:20.757 "data_offset": 0, 00:18:20.757 "data_size": 0 00:18:20.757 } 00:18:20.757 ] 00:18:20.757 }' 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.757 07:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.323 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 [2024-11-20 07:15:03.302617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.324 [2024-11-20 07:15:03.302769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 [2024-11-20 07:15:03.314663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.324 [2024-11-20 07:15:03.316807] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:21.324 [2024-11-20 07:15:03.316942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.324 [2024-11-20 07:15:03.316987] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.324 [2024-11-20 07:15:03.317028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.324 "name": "Existed_Raid", 00:18:21.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.324 "strip_size_kb": 64, 00:18:21.324 "state": "configuring", 00:18:21.324 "raid_level": "raid5f", 00:18:21.324 "superblock": false, 00:18:21.324 "num_base_bdevs": 3, 00:18:21.324 "num_base_bdevs_discovered": 1, 00:18:21.324 "num_base_bdevs_operational": 3, 00:18:21.324 "base_bdevs_list": [ 00:18:21.324 { 00:18:21.324 "name": "BaseBdev1", 00:18:21.324 "uuid": "55a71908-e56b-4f52-9302-0de467bd593a", 00:18:21.324 "is_configured": true, 00:18:21.324 "data_offset": 0, 00:18:21.324 "data_size": 65536 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "name": "BaseBdev2", 00:18:21.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.324 "is_configured": false, 00:18:21.324 "data_offset": 0, 00:18:21.324 "data_size": 0 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "name": "BaseBdev3", 00:18:21.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.324 "is_configured": false, 00:18:21.324 "data_offset": 0, 00:18:21.324 "data_size": 0 00:18:21.324 } 00:18:21.324 ] 00:18:21.324 }' 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.324 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.582 [2024-11-20 07:15:03.739901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.582 BaseBdev2 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.582 07:15:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.582 [ 00:18:21.582 { 00:18:21.582 "name": "BaseBdev2", 00:18:21.582 "aliases": [ 00:18:21.582 "55ff6fbc-314a-4449-9dba-c282cfba364e" 00:18:21.582 ], 00:18:21.582 "product_name": "Malloc disk", 00:18:21.582 "block_size": 512, 00:18:21.582 "num_blocks": 65536, 00:18:21.582 "uuid": "55ff6fbc-314a-4449-9dba-c282cfba364e", 00:18:21.582 "assigned_rate_limits": { 00:18:21.582 "rw_ios_per_sec": 0, 00:18:21.582 "rw_mbytes_per_sec": 0, 00:18:21.582 "r_mbytes_per_sec": 0, 00:18:21.582 "w_mbytes_per_sec": 0 00:18:21.582 }, 00:18:21.582 "claimed": true, 00:18:21.582 "claim_type": "exclusive_write", 00:18:21.582 "zoned": false, 00:18:21.582 "supported_io_types": { 00:18:21.582 "read": true, 00:18:21.582 "write": true, 00:18:21.582 "unmap": true, 00:18:21.582 "flush": true, 00:18:21.582 "reset": true, 00:18:21.583 "nvme_admin": false, 00:18:21.583 "nvme_io": false, 00:18:21.583 "nvme_io_md": false, 00:18:21.583 "write_zeroes": true, 00:18:21.583 "zcopy": true, 00:18:21.583 "get_zone_info": false, 00:18:21.583 "zone_management": false, 00:18:21.583 "zone_append": false, 00:18:21.583 "compare": false, 00:18:21.583 "compare_and_write": false, 00:18:21.583 "abort": true, 00:18:21.583 "seek_hole": false, 00:18:21.583 "seek_data": false, 00:18:21.583 "copy": true, 00:18:21.583 "nvme_iov_md": false 00:18:21.583 }, 00:18:21.583 "memory_domains": [ 00:18:21.583 { 00:18:21.583 "dma_device_id": "system", 00:18:21.583 "dma_device_type": 1 00:18:21.583 }, 00:18:21.583 { 00:18:21.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.583 "dma_device_type": 2 00:18:21.583 } 00:18:21.583 ], 00:18:21.583 "driver_specific": {} 00:18:21.583 } 00:18:21.583 ] 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:21.583 "name": "Existed_Raid", 00:18:21.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.583 "strip_size_kb": 64, 00:18:21.583 "state": "configuring", 00:18:21.583 "raid_level": "raid5f", 00:18:21.583 "superblock": false, 00:18:21.583 "num_base_bdevs": 3, 00:18:21.583 "num_base_bdevs_discovered": 2, 00:18:21.583 "num_base_bdevs_operational": 3, 00:18:21.583 "base_bdevs_list": [ 00:18:21.583 { 00:18:21.583 "name": "BaseBdev1", 00:18:21.583 "uuid": "55a71908-e56b-4f52-9302-0de467bd593a", 00:18:21.583 "is_configured": true, 00:18:21.583 "data_offset": 0, 00:18:21.583 "data_size": 65536 00:18:21.583 }, 00:18:21.583 { 00:18:21.583 "name": "BaseBdev2", 00:18:21.583 "uuid": "55ff6fbc-314a-4449-9dba-c282cfba364e", 00:18:21.583 "is_configured": true, 00:18:21.583 "data_offset": 0, 00:18:21.583 "data_size": 65536 00:18:21.583 }, 00:18:21.583 { 00:18:21.583 "name": "BaseBdev3", 00:18:21.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.583 "is_configured": false, 00:18:21.583 "data_offset": 0, 00:18:21.583 "data_size": 0 00:18:21.583 } 00:18:21.583 ] 00:18:21.583 }' 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.583 07:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.149 [2024-11-20 07:15:04.217081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.149 [2024-11-20 07:15:04.217252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:22.149 [2024-11-20 07:15:04.217295] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:22.149 [2024-11-20 07:15:04.217685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:22.149 [2024-11-20 07:15:04.224241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:22.149 [2024-11-20 07:15:04.224306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:22.149 [2024-11-20 07:15:04.224691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.149 BaseBdev3 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:22.149 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.150 [ 00:18:22.150 { 00:18:22.150 "name": "BaseBdev3", 00:18:22.150 "aliases": [ 00:18:22.150 "7bd54234-f167-46cb-a6bc-e255ace4a48a" 00:18:22.150 ], 00:18:22.150 "product_name": "Malloc disk", 00:18:22.150 "block_size": 512, 00:18:22.150 "num_blocks": 65536, 00:18:22.150 "uuid": "7bd54234-f167-46cb-a6bc-e255ace4a48a", 00:18:22.150 "assigned_rate_limits": { 00:18:22.150 "rw_ios_per_sec": 0, 00:18:22.150 "rw_mbytes_per_sec": 0, 00:18:22.150 "r_mbytes_per_sec": 0, 00:18:22.150 "w_mbytes_per_sec": 0 00:18:22.150 }, 00:18:22.150 "claimed": true, 00:18:22.150 "claim_type": "exclusive_write", 00:18:22.150 "zoned": false, 00:18:22.150 "supported_io_types": { 00:18:22.150 "read": true, 00:18:22.150 "write": true, 00:18:22.150 "unmap": true, 00:18:22.150 "flush": true, 00:18:22.150 "reset": true, 00:18:22.150 "nvme_admin": false, 00:18:22.150 "nvme_io": false, 00:18:22.150 "nvme_io_md": false, 00:18:22.150 "write_zeroes": true, 00:18:22.150 "zcopy": true, 00:18:22.150 "get_zone_info": false, 00:18:22.150 "zone_management": false, 00:18:22.150 "zone_append": false, 00:18:22.150 "compare": false, 00:18:22.150 "compare_and_write": false, 00:18:22.150 "abort": true, 00:18:22.150 "seek_hole": false, 00:18:22.150 "seek_data": false, 00:18:22.150 "copy": true, 00:18:22.150 "nvme_iov_md": false 00:18:22.150 }, 00:18:22.150 "memory_domains": [ 00:18:22.150 { 00:18:22.150 "dma_device_id": "system", 00:18:22.150 "dma_device_type": 1 00:18:22.150 }, 00:18:22.150 { 00:18:22.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.150 "dma_device_type": 2 00:18:22.150 } 00:18:22.150 ], 00:18:22.150 "driver_specific": {} 00:18:22.150 } 00:18:22.150 ] 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.150 07:15:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.150 "name": "Existed_Raid", 00:18:22.150 "uuid": "6d1b3ff2-962d-4642-a126-0567c3c1a40f", 00:18:22.150 "strip_size_kb": 64, 00:18:22.150 "state": "online", 00:18:22.150 "raid_level": "raid5f", 00:18:22.150 "superblock": false, 00:18:22.150 "num_base_bdevs": 3, 00:18:22.150 "num_base_bdevs_discovered": 3, 00:18:22.150 "num_base_bdevs_operational": 3, 00:18:22.150 "base_bdevs_list": [ 00:18:22.150 { 00:18:22.150 "name": "BaseBdev1", 00:18:22.150 "uuid": "55a71908-e56b-4f52-9302-0de467bd593a", 00:18:22.150 "is_configured": true, 00:18:22.150 "data_offset": 0, 00:18:22.150 "data_size": 65536 00:18:22.150 }, 00:18:22.150 { 00:18:22.150 "name": "BaseBdev2", 00:18:22.150 "uuid": "55ff6fbc-314a-4449-9dba-c282cfba364e", 00:18:22.150 "is_configured": true, 00:18:22.150 "data_offset": 0, 00:18:22.150 "data_size": 65536 00:18:22.150 }, 00:18:22.150 { 00:18:22.150 "name": "BaseBdev3", 00:18:22.150 "uuid": "7bd54234-f167-46cb-a6bc-e255ace4a48a", 00:18:22.150 "is_configured": true, 00:18:22.150 "data_offset": 0, 00:18:22.150 "data_size": 65536 00:18:22.150 } 00:18:22.150 ] 00:18:22.150 }' 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.150 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.717 07:15:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.717 [2024-11-20 07:15:04.743990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.717 "name": "Existed_Raid", 00:18:22.717 "aliases": [ 00:18:22.717 "6d1b3ff2-962d-4642-a126-0567c3c1a40f" 00:18:22.717 ], 00:18:22.717 "product_name": "Raid Volume", 00:18:22.717 "block_size": 512, 00:18:22.717 "num_blocks": 131072, 00:18:22.717 "uuid": "6d1b3ff2-962d-4642-a126-0567c3c1a40f", 00:18:22.717 "assigned_rate_limits": { 00:18:22.717 "rw_ios_per_sec": 0, 00:18:22.717 "rw_mbytes_per_sec": 0, 00:18:22.717 "r_mbytes_per_sec": 0, 00:18:22.717 "w_mbytes_per_sec": 0 00:18:22.717 }, 00:18:22.717 "claimed": false, 00:18:22.717 "zoned": false, 00:18:22.717 "supported_io_types": { 00:18:22.717 "read": true, 00:18:22.717 "write": true, 00:18:22.717 "unmap": false, 00:18:22.717 "flush": false, 00:18:22.717 "reset": true, 00:18:22.717 "nvme_admin": false, 00:18:22.717 "nvme_io": false, 00:18:22.717 "nvme_io_md": false, 00:18:22.717 "write_zeroes": true, 00:18:22.717 "zcopy": false, 00:18:22.717 "get_zone_info": false, 00:18:22.717 "zone_management": false, 00:18:22.717 "zone_append": false, 
00:18:22.717 "compare": false, 00:18:22.717 "compare_and_write": false, 00:18:22.717 "abort": false, 00:18:22.717 "seek_hole": false, 00:18:22.717 "seek_data": false, 00:18:22.717 "copy": false, 00:18:22.717 "nvme_iov_md": false 00:18:22.717 }, 00:18:22.717 "driver_specific": { 00:18:22.717 "raid": { 00:18:22.717 "uuid": "6d1b3ff2-962d-4642-a126-0567c3c1a40f", 00:18:22.717 "strip_size_kb": 64, 00:18:22.717 "state": "online", 00:18:22.717 "raid_level": "raid5f", 00:18:22.717 "superblock": false, 00:18:22.717 "num_base_bdevs": 3, 00:18:22.717 "num_base_bdevs_discovered": 3, 00:18:22.717 "num_base_bdevs_operational": 3, 00:18:22.717 "base_bdevs_list": [ 00:18:22.717 { 00:18:22.717 "name": "BaseBdev1", 00:18:22.717 "uuid": "55a71908-e56b-4f52-9302-0de467bd593a", 00:18:22.717 "is_configured": true, 00:18:22.717 "data_offset": 0, 00:18:22.717 "data_size": 65536 00:18:22.717 }, 00:18:22.717 { 00:18:22.717 "name": "BaseBdev2", 00:18:22.717 "uuid": "55ff6fbc-314a-4449-9dba-c282cfba364e", 00:18:22.717 "is_configured": true, 00:18:22.717 "data_offset": 0, 00:18:22.717 "data_size": 65536 00:18:22.717 }, 00:18:22.717 { 00:18:22.717 "name": "BaseBdev3", 00:18:22.717 "uuid": "7bd54234-f167-46cb-a6bc-e255ace4a48a", 00:18:22.717 "is_configured": true, 00:18:22.717 "data_offset": 0, 00:18:22.717 "data_size": 65536 00:18:22.717 } 00:18:22.717 ] 00:18:22.717 } 00:18:22.717 } 00:18:22.717 }' 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.717 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:22.717 BaseBdev2 00:18:22.717 BaseBdev3' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.718 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.977 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:22.977 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:22.977 07:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:22.977 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.977 07:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.977 [2024-11-20 07:15:04.991389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:22.977 
07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.977 "name": "Existed_Raid", 00:18:22.977 "uuid": "6d1b3ff2-962d-4642-a126-0567c3c1a40f", 00:18:22.977 "strip_size_kb": 64, 00:18:22.977 "state": 
"online", 00:18:22.977 "raid_level": "raid5f", 00:18:22.977 "superblock": false, 00:18:22.977 "num_base_bdevs": 3, 00:18:22.977 "num_base_bdevs_discovered": 2, 00:18:22.977 "num_base_bdevs_operational": 2, 00:18:22.977 "base_bdevs_list": [ 00:18:22.977 { 00:18:22.977 "name": null, 00:18:22.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.977 "is_configured": false, 00:18:22.977 "data_offset": 0, 00:18:22.977 "data_size": 65536 00:18:22.977 }, 00:18:22.977 { 00:18:22.977 "name": "BaseBdev2", 00:18:22.977 "uuid": "55ff6fbc-314a-4449-9dba-c282cfba364e", 00:18:22.977 "is_configured": true, 00:18:22.977 "data_offset": 0, 00:18:22.977 "data_size": 65536 00:18:22.977 }, 00:18:22.977 { 00:18:22.977 "name": "BaseBdev3", 00:18:22.977 "uuid": "7bd54234-f167-46cb-a6bc-e255ace4a48a", 00:18:22.977 "is_configured": true, 00:18:22.977 "data_offset": 0, 00:18:22.977 "data_size": 65536 00:18:22.977 } 00:18:22.977 ] 00:18:22.977 }' 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.977 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.545 [2024-11-20 07:15:05.583467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:23.545 [2024-11-20 07:15:05.583576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.545 [2024-11-20 07:15:05.700704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.545 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.545 [2024-11-20 07:15:05.744718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:23.545 [2024-11-20 07:15:05.744800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.804 BaseBdev2 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:23.804 [ 00:18:23.804 { 00:18:23.804 "name": "BaseBdev2", 00:18:23.804 "aliases": [ 00:18:23.804 "e946b49c-7eea-4f22-943b-02e1ac0874b9" 00:18:23.804 ], 00:18:23.804 "product_name": "Malloc disk", 00:18:23.804 "block_size": 512, 00:18:23.804 "num_blocks": 65536, 00:18:23.804 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:23.804 "assigned_rate_limits": { 00:18:23.804 "rw_ios_per_sec": 0, 00:18:23.804 "rw_mbytes_per_sec": 0, 00:18:23.804 "r_mbytes_per_sec": 0, 00:18:23.804 "w_mbytes_per_sec": 0 00:18:23.804 }, 00:18:23.804 "claimed": false, 00:18:23.804 "zoned": false, 00:18:23.804 "supported_io_types": { 00:18:23.804 "read": true, 00:18:23.804 "write": true, 00:18:23.804 "unmap": true, 00:18:23.804 "flush": true, 00:18:23.804 "reset": true, 00:18:23.804 "nvme_admin": false, 00:18:23.804 "nvme_io": false, 00:18:23.804 "nvme_io_md": false, 00:18:23.804 "write_zeroes": true, 00:18:23.804 "zcopy": true, 00:18:23.804 "get_zone_info": false, 00:18:23.804 "zone_management": false, 00:18:23.804 "zone_append": false, 00:18:23.804 "compare": false, 00:18:23.804 "compare_and_write": false, 00:18:23.804 "abort": true, 00:18:23.804 "seek_hole": false, 00:18:23.804 "seek_data": false, 00:18:23.804 "copy": true, 00:18:23.804 "nvme_iov_md": false 00:18:23.804 }, 00:18:23.804 "memory_domains": [ 00:18:23.804 { 00:18:23.804 "dma_device_id": "system", 00:18:23.804 "dma_device_type": 1 00:18:23.804 }, 00:18:23.804 { 00:18:23.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.804 "dma_device_type": 2 00:18:23.804 } 00:18:23.804 ], 00:18:23.804 "driver_specific": {} 00:18:23.804 } 00:18:23.804 ] 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.804 07:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.804 BaseBdev3 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.804 07:15:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.804 [ 00:18:23.804 { 00:18:23.804 "name": "BaseBdev3", 00:18:23.804 "aliases": [ 00:18:23.804 "7f383bec-17b7-4508-812a-d9e121d5d060" 00:18:23.804 ], 00:18:24.064 "product_name": "Malloc disk", 00:18:24.064 "block_size": 512, 00:18:24.064 "num_blocks": 65536, 00:18:24.064 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:24.064 "assigned_rate_limits": { 00:18:24.064 "rw_ios_per_sec": 0, 00:18:24.064 "rw_mbytes_per_sec": 0, 00:18:24.064 "r_mbytes_per_sec": 0, 00:18:24.064 "w_mbytes_per_sec": 0 00:18:24.064 }, 00:18:24.064 "claimed": false, 00:18:24.064 "zoned": false, 00:18:24.064 "supported_io_types": { 00:18:24.064 "read": true, 00:18:24.064 "write": true, 00:18:24.064 "unmap": true, 00:18:24.064 "flush": true, 00:18:24.064 "reset": true, 00:18:24.064 "nvme_admin": false, 00:18:24.064 "nvme_io": false, 00:18:24.064 "nvme_io_md": false, 00:18:24.064 "write_zeroes": true, 00:18:24.064 "zcopy": true, 00:18:24.064 "get_zone_info": false, 00:18:24.064 "zone_management": false, 00:18:24.064 "zone_append": false, 00:18:24.064 "compare": false, 00:18:24.064 "compare_and_write": false, 00:18:24.064 "abort": true, 00:18:24.064 "seek_hole": false, 00:18:24.064 "seek_data": false, 00:18:24.064 "copy": true, 00:18:24.064 "nvme_iov_md": false 00:18:24.064 }, 00:18:24.064 "memory_domains": [ 00:18:24.064 { 00:18:24.064 "dma_device_id": "system", 00:18:24.064 "dma_device_type": 1 00:18:24.064 }, 00:18:24.064 { 00:18:24.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.064 "dma_device_type": 2 00:18:24.064 } 00:18:24.064 ], 00:18:24.064 "driver_specific": {} 00:18:24.064 } 00:18:24.064 ] 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:24.064 07:15:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.064 [2024-11-20 07:15:06.075690] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.064 [2024-11-20 07:15:06.075811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.064 [2024-11-20 07:15:06.075880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.064 [2024-11-20 07:15:06.078258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.064 07:15:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.064 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.065 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.065 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.065 "name": "Existed_Raid", 00:18:24.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.065 "strip_size_kb": 64, 00:18:24.065 "state": "configuring", 00:18:24.065 "raid_level": "raid5f", 00:18:24.065 "superblock": false, 00:18:24.065 "num_base_bdevs": 3, 00:18:24.065 "num_base_bdevs_discovered": 2, 00:18:24.065 "num_base_bdevs_operational": 3, 00:18:24.065 "base_bdevs_list": [ 00:18:24.065 { 00:18:24.065 "name": "BaseBdev1", 00:18:24.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.065 "is_configured": false, 00:18:24.065 "data_offset": 0, 00:18:24.065 "data_size": 0 00:18:24.065 }, 00:18:24.065 { 00:18:24.065 "name": "BaseBdev2", 00:18:24.065 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:24.065 "is_configured": true, 00:18:24.065 "data_offset": 0, 00:18:24.065 "data_size": 65536 00:18:24.065 }, 00:18:24.065 { 00:18:24.065 "name": "BaseBdev3", 00:18:24.065 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:24.065 "is_configured": true, 
00:18:24.065 "data_offset": 0, 00:18:24.065 "data_size": 65536 00:18:24.065 } 00:18:24.065 ] 00:18:24.065 }' 00:18:24.065 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.065 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.324 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:24.324 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.324 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.325 [2024-11-20 07:15:06.483041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.325 07:15:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.325 "name": "Existed_Raid", 00:18:24.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.325 "strip_size_kb": 64, 00:18:24.325 "state": "configuring", 00:18:24.325 "raid_level": "raid5f", 00:18:24.325 "superblock": false, 00:18:24.325 "num_base_bdevs": 3, 00:18:24.325 "num_base_bdevs_discovered": 1, 00:18:24.325 "num_base_bdevs_operational": 3, 00:18:24.325 "base_bdevs_list": [ 00:18:24.325 { 00:18:24.325 "name": "BaseBdev1", 00:18:24.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.325 "is_configured": false, 00:18:24.325 "data_offset": 0, 00:18:24.325 "data_size": 0 00:18:24.325 }, 00:18:24.325 { 00:18:24.325 "name": null, 00:18:24.325 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:24.325 "is_configured": false, 00:18:24.325 "data_offset": 0, 00:18:24.325 "data_size": 65536 00:18:24.325 }, 00:18:24.325 { 00:18:24.325 "name": "BaseBdev3", 00:18:24.325 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:24.325 "is_configured": true, 00:18:24.325 "data_offset": 0, 00:18:24.325 "data_size": 65536 00:18:24.325 } 00:18:24.325 ] 00:18:24.325 }' 00:18:24.325 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.325 07:15:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.913 07:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.913 [2024-11-20 07:15:07.016513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.913 BaseBdev1 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:24.913 07:15:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.913 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.913 [ 00:18:24.913 { 00:18:24.913 "name": "BaseBdev1", 00:18:24.913 "aliases": [ 00:18:24.913 "4ed20eaf-625e-404a-92e8-cdd17ae83e2f" 00:18:24.913 ], 00:18:24.913 "product_name": "Malloc disk", 00:18:24.913 "block_size": 512, 00:18:24.913 "num_blocks": 65536, 00:18:24.913 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:24.913 "assigned_rate_limits": { 00:18:24.913 "rw_ios_per_sec": 0, 00:18:24.913 "rw_mbytes_per_sec": 0, 00:18:24.913 "r_mbytes_per_sec": 0, 00:18:24.913 "w_mbytes_per_sec": 0 00:18:24.913 }, 00:18:24.913 "claimed": true, 00:18:24.913 "claim_type": "exclusive_write", 00:18:24.913 "zoned": false, 00:18:24.913 "supported_io_types": { 00:18:24.913 "read": true, 00:18:24.913 "write": true, 00:18:24.913 "unmap": true, 00:18:24.913 "flush": true, 00:18:24.913 "reset": true, 00:18:24.914 "nvme_admin": false, 00:18:24.914 "nvme_io": false, 00:18:24.914 "nvme_io_md": false, 00:18:24.914 "write_zeroes": true, 00:18:24.914 "zcopy": true, 00:18:24.914 "get_zone_info": false, 00:18:24.914 "zone_management": false, 00:18:24.914 "zone_append": false, 00:18:24.914 
"compare": false, 00:18:24.914 "compare_and_write": false, 00:18:24.914 "abort": true, 00:18:24.914 "seek_hole": false, 00:18:24.914 "seek_data": false, 00:18:24.914 "copy": true, 00:18:24.914 "nvme_iov_md": false 00:18:24.914 }, 00:18:24.914 "memory_domains": [ 00:18:24.914 { 00:18:24.914 "dma_device_id": "system", 00:18:24.914 "dma_device_type": 1 00:18:24.914 }, 00:18:24.914 { 00:18:24.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.914 "dma_device_type": 2 00:18:24.914 } 00:18:24.914 ], 00:18:24.914 "driver_specific": {} 00:18:24.914 } 00:18:24.914 ] 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.914 07:15:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.914 "name": "Existed_Raid", 00:18:24.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.914 "strip_size_kb": 64, 00:18:24.914 "state": "configuring", 00:18:24.914 "raid_level": "raid5f", 00:18:24.914 "superblock": false, 00:18:24.914 "num_base_bdevs": 3, 00:18:24.914 "num_base_bdevs_discovered": 2, 00:18:24.914 "num_base_bdevs_operational": 3, 00:18:24.914 "base_bdevs_list": [ 00:18:24.914 { 00:18:24.914 "name": "BaseBdev1", 00:18:24.914 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:24.914 "is_configured": true, 00:18:24.914 "data_offset": 0, 00:18:24.914 "data_size": 65536 00:18:24.914 }, 00:18:24.914 { 00:18:24.914 "name": null, 00:18:24.914 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:24.914 "is_configured": false, 00:18:24.914 "data_offset": 0, 00:18:24.914 "data_size": 65536 00:18:24.914 }, 00:18:24.914 { 00:18:24.914 "name": "BaseBdev3", 00:18:24.914 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:24.914 "is_configured": true, 00:18:24.914 "data_offset": 0, 00:18:24.914 "data_size": 65536 00:18:24.914 } 00:18:24.914 ] 00:18:24.914 }' 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.914 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.227 07:15:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.227 [2024-11-20 07:15:07.475841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.227 07:15:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.227 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.488 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.488 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.488 "name": "Existed_Raid", 00:18:25.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.488 "strip_size_kb": 64, 00:18:25.488 "state": "configuring", 00:18:25.488 "raid_level": "raid5f", 00:18:25.488 "superblock": false, 00:18:25.488 "num_base_bdevs": 3, 00:18:25.488 "num_base_bdevs_discovered": 1, 00:18:25.488 "num_base_bdevs_operational": 3, 00:18:25.488 "base_bdevs_list": [ 00:18:25.488 { 00:18:25.488 "name": "BaseBdev1", 00:18:25.488 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:25.488 "is_configured": true, 00:18:25.488 "data_offset": 0, 00:18:25.488 "data_size": 65536 00:18:25.488 }, 00:18:25.488 { 00:18:25.488 "name": null, 00:18:25.488 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:25.488 "is_configured": false, 00:18:25.488 "data_offset": 0, 00:18:25.488 "data_size": 65536 00:18:25.488 }, 00:18:25.488 { 00:18:25.488 "name": null, 
00:18:25.488 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:25.488 "is_configured": false, 00:18:25.488 "data_offset": 0, 00:18:25.488 "data_size": 65536 00:18:25.488 } 00:18:25.488 ] 00:18:25.488 }' 00:18:25.488 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.488 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.748 [2024-11-20 07:15:07.959051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.748 07:15:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.748 07:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.008 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.008 "name": "Existed_Raid", 00:18:26.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.008 "strip_size_kb": 64, 00:18:26.008 "state": "configuring", 00:18:26.008 "raid_level": "raid5f", 00:18:26.008 "superblock": false, 00:18:26.008 "num_base_bdevs": 3, 00:18:26.008 "num_base_bdevs_discovered": 2, 00:18:26.008 "num_base_bdevs_operational": 3, 00:18:26.008 "base_bdevs_list": [ 00:18:26.008 { 
00:18:26.008 "name": "BaseBdev1", 00:18:26.008 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:26.008 "is_configured": true, 00:18:26.008 "data_offset": 0, 00:18:26.008 "data_size": 65536 00:18:26.008 }, 00:18:26.008 { 00:18:26.008 "name": null, 00:18:26.008 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:26.008 "is_configured": false, 00:18:26.008 "data_offset": 0, 00:18:26.008 "data_size": 65536 00:18:26.008 }, 00:18:26.008 { 00:18:26.008 "name": "BaseBdev3", 00:18:26.008 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:26.008 "is_configured": true, 00:18:26.008 "data_offset": 0, 00:18:26.008 "data_size": 65536 00:18:26.008 } 00:18:26.008 ] 00:18:26.008 }' 00:18:26.008 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.008 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.266 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.266 [2024-11-20 07:15:08.486170] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.525 "name": "Existed_Raid", 00:18:26.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.525 "strip_size_kb": 64, 00:18:26.525 "state": "configuring", 00:18:26.525 "raid_level": "raid5f", 00:18:26.525 "superblock": false, 00:18:26.525 "num_base_bdevs": 3, 00:18:26.525 "num_base_bdevs_discovered": 1, 00:18:26.525 "num_base_bdevs_operational": 3, 00:18:26.525 "base_bdevs_list": [ 00:18:26.525 { 00:18:26.525 "name": null, 00:18:26.525 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:26.525 "is_configured": false, 00:18:26.525 "data_offset": 0, 00:18:26.525 "data_size": 65536 00:18:26.525 }, 00:18:26.525 { 00:18:26.525 "name": null, 00:18:26.525 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:26.525 "is_configured": false, 00:18:26.525 "data_offset": 0, 00:18:26.525 "data_size": 65536 00:18:26.525 }, 00:18:26.525 { 00:18:26.525 "name": "BaseBdev3", 00:18:26.525 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:26.525 "is_configured": true, 00:18:26.525 "data_offset": 0, 00:18:26.525 "data_size": 65536 00:18:26.525 } 00:18:26.525 ] 00:18:26.525 }' 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.525 07:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.107 [2024-11-20 07:15:09.115412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.107 07:15:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.107 "name": "Existed_Raid", 00:18:27.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.107 "strip_size_kb": 64, 00:18:27.107 "state": "configuring", 00:18:27.107 "raid_level": "raid5f", 00:18:27.107 "superblock": false, 00:18:27.107 "num_base_bdevs": 3, 00:18:27.107 "num_base_bdevs_discovered": 2, 00:18:27.107 "num_base_bdevs_operational": 3, 00:18:27.107 "base_bdevs_list": [ 00:18:27.107 { 00:18:27.107 "name": null, 00:18:27.107 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:27.107 "is_configured": false, 00:18:27.107 "data_offset": 0, 00:18:27.107 "data_size": 65536 00:18:27.107 }, 00:18:27.107 { 00:18:27.107 "name": "BaseBdev2", 00:18:27.107 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:27.107 "is_configured": true, 00:18:27.107 "data_offset": 0, 00:18:27.107 "data_size": 65536 00:18:27.107 }, 00:18:27.107 { 00:18:27.107 "name": "BaseBdev3", 00:18:27.107 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:27.107 "is_configured": true, 00:18:27.107 "data_offset": 0, 00:18:27.107 "data_size": 65536 00:18:27.107 } 00:18:27.107 ] 00:18:27.107 }' 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.107 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:27.375 
07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.375 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ed20eaf-625e-404a-92e8-cdd17ae83e2f 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.634 [2024-11-20 07:15:09.719091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:27.634 [2024-11-20 07:15:09.719144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:27.634 [2024-11-20 07:15:09.719155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:27.634 [2024-11-20 07:15:09.719460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:18:27.634 [2024-11-20 07:15:09.725270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:27.634 [2024-11-20 07:15:09.725294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:27.634 [2024-11-20 07:15:09.725643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.634 NewBaseBdev 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:27.634 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.634 07:15:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.634 [ 00:18:27.634 { 00:18:27.634 "name": "NewBaseBdev", 00:18:27.634 "aliases": [ 00:18:27.634 "4ed20eaf-625e-404a-92e8-cdd17ae83e2f" 00:18:27.634 ], 00:18:27.634 "product_name": "Malloc disk", 00:18:27.634 "block_size": 512, 00:18:27.634 "num_blocks": 65536, 00:18:27.634 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:27.634 "assigned_rate_limits": { 00:18:27.634 "rw_ios_per_sec": 0, 00:18:27.634 "rw_mbytes_per_sec": 0, 00:18:27.634 "r_mbytes_per_sec": 0, 00:18:27.635 "w_mbytes_per_sec": 0 00:18:27.635 }, 00:18:27.635 "claimed": true, 00:18:27.635 "claim_type": "exclusive_write", 00:18:27.635 "zoned": false, 00:18:27.635 "supported_io_types": { 00:18:27.635 "read": true, 00:18:27.635 "write": true, 00:18:27.635 "unmap": true, 00:18:27.635 "flush": true, 00:18:27.635 "reset": true, 00:18:27.635 "nvme_admin": false, 00:18:27.635 "nvme_io": false, 00:18:27.635 "nvme_io_md": false, 00:18:27.635 "write_zeroes": true, 00:18:27.635 "zcopy": true, 00:18:27.635 "get_zone_info": false, 00:18:27.635 "zone_management": false, 00:18:27.635 "zone_append": false, 00:18:27.635 "compare": false, 00:18:27.635 "compare_and_write": false, 00:18:27.635 "abort": true, 00:18:27.635 "seek_hole": false, 00:18:27.635 "seek_data": false, 00:18:27.635 "copy": true, 00:18:27.635 "nvme_iov_md": false 00:18:27.635 }, 00:18:27.635 "memory_domains": [ 00:18:27.635 { 00:18:27.635 "dma_device_id": "system", 00:18:27.635 "dma_device_type": 1 00:18:27.635 }, 00:18:27.635 { 00:18:27.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.635 "dma_device_type": 2 00:18:27.635 } 00:18:27.635 ], 00:18:27.635 "driver_specific": {} 00:18:27.635 } 00:18:27.635 ] 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:27.635 07:15:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.635 "name": "Existed_Raid", 00:18:27.635 "uuid": "41780dd0-aca6-4b22-bcfd-48481ec2b91d", 00:18:27.635 "strip_size_kb": 64, 00:18:27.635 "state": "online", 
00:18:27.635 "raid_level": "raid5f", 00:18:27.635 "superblock": false, 00:18:27.635 "num_base_bdevs": 3, 00:18:27.635 "num_base_bdevs_discovered": 3, 00:18:27.635 "num_base_bdevs_operational": 3, 00:18:27.635 "base_bdevs_list": [ 00:18:27.635 { 00:18:27.635 "name": "NewBaseBdev", 00:18:27.635 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:27.635 "is_configured": true, 00:18:27.635 "data_offset": 0, 00:18:27.635 "data_size": 65536 00:18:27.635 }, 00:18:27.635 { 00:18:27.635 "name": "BaseBdev2", 00:18:27.635 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:27.635 "is_configured": true, 00:18:27.635 "data_offset": 0, 00:18:27.635 "data_size": 65536 00:18:27.635 }, 00:18:27.635 { 00:18:27.635 "name": "BaseBdev3", 00:18:27.635 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:27.635 "is_configured": true, 00:18:27.635 "data_offset": 0, 00:18:27.635 "data_size": 65536 00:18:27.635 } 00:18:27.635 ] 00:18:27.635 }' 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.635 07:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:28.204 07:15:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.204 [2024-11-20 07:15:10.232986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.204 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:28.204 "name": "Existed_Raid", 00:18:28.204 "aliases": [ 00:18:28.204 "41780dd0-aca6-4b22-bcfd-48481ec2b91d" 00:18:28.204 ], 00:18:28.204 "product_name": "Raid Volume", 00:18:28.204 "block_size": 512, 00:18:28.204 "num_blocks": 131072, 00:18:28.204 "uuid": "41780dd0-aca6-4b22-bcfd-48481ec2b91d", 00:18:28.204 "assigned_rate_limits": { 00:18:28.204 "rw_ios_per_sec": 0, 00:18:28.204 "rw_mbytes_per_sec": 0, 00:18:28.204 "r_mbytes_per_sec": 0, 00:18:28.204 "w_mbytes_per_sec": 0 00:18:28.204 }, 00:18:28.204 "claimed": false, 00:18:28.204 "zoned": false, 00:18:28.204 "supported_io_types": { 00:18:28.204 "read": true, 00:18:28.204 "write": true, 00:18:28.204 "unmap": false, 00:18:28.204 "flush": false, 00:18:28.204 "reset": true, 00:18:28.204 "nvme_admin": false, 00:18:28.204 "nvme_io": false, 00:18:28.204 "nvme_io_md": false, 00:18:28.204 "write_zeroes": true, 00:18:28.204 "zcopy": false, 00:18:28.204 "get_zone_info": false, 00:18:28.204 "zone_management": false, 00:18:28.204 "zone_append": false, 00:18:28.204 "compare": false, 00:18:28.204 "compare_and_write": false, 00:18:28.204 "abort": false, 00:18:28.204 "seek_hole": false, 00:18:28.204 "seek_data": false, 00:18:28.204 "copy": false, 00:18:28.204 "nvme_iov_md": false 00:18:28.204 }, 00:18:28.204 "driver_specific": { 00:18:28.204 "raid": { 00:18:28.204 "uuid": 
"41780dd0-aca6-4b22-bcfd-48481ec2b91d", 00:18:28.204 "strip_size_kb": 64, 00:18:28.204 "state": "online", 00:18:28.204 "raid_level": "raid5f", 00:18:28.204 "superblock": false, 00:18:28.204 "num_base_bdevs": 3, 00:18:28.204 "num_base_bdevs_discovered": 3, 00:18:28.204 "num_base_bdevs_operational": 3, 00:18:28.204 "base_bdevs_list": [ 00:18:28.204 { 00:18:28.204 "name": "NewBaseBdev", 00:18:28.204 "uuid": "4ed20eaf-625e-404a-92e8-cdd17ae83e2f", 00:18:28.205 "is_configured": true, 00:18:28.205 "data_offset": 0, 00:18:28.205 "data_size": 65536 00:18:28.205 }, 00:18:28.205 { 00:18:28.205 "name": "BaseBdev2", 00:18:28.205 "uuid": "e946b49c-7eea-4f22-943b-02e1ac0874b9", 00:18:28.205 "is_configured": true, 00:18:28.205 "data_offset": 0, 00:18:28.205 "data_size": 65536 00:18:28.205 }, 00:18:28.205 { 00:18:28.205 "name": "BaseBdev3", 00:18:28.205 "uuid": "7f383bec-17b7-4508-812a-d9e121d5d060", 00:18:28.205 "is_configured": true, 00:18:28.205 "data_offset": 0, 00:18:28.205 "data_size": 65536 00:18:28.205 } 00:18:28.205 ] 00:18:28.205 } 00:18:28.205 } 00:18:28.205 }' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:28.205 BaseBdev2 00:18:28.205 BaseBdev3' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.205 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.475 [2024-11-20 07:15:10.536239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.475 [2024-11-20 07:15:10.536353] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.475 [2024-11-20 07:15:10.536495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.475 [2024-11-20 07:15:10.536867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.475 [2024-11-20 07:15:10.536940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80351 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80351 ']' 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80351 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80351 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.475 killing process with pid 80351 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80351' 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80351 00:18:28.475 [2024-11-20 07:15:10.584517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.475 07:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80351 00:18:28.734 [2024-11-20 07:15:10.906917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.114 07:15:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:30.114 00:18:30.114 real 0m10.859s 00:18:30.114 user 0m17.091s 00:18:30.114 sys 0m1.862s 00:18:30.114 07:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.114 ************************************ 00:18:30.114 END TEST raid5f_state_function_test 00:18:30.114 ************************************ 00:18:30.114 07:15:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.114 07:15:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:18:30.114 07:15:12 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:30.114 07:15:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.114 07:15:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.114 ************************************ 00:18:30.114 START TEST raid5f_state_function_test_sb 00:18:30.114 ************************************ 00:18:30.114 07:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:30.115 07:15:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80978 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80978' 00:18:30.115 Process raid pid: 80978 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80978 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80978 ']' 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.115 07:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.115 [2024-11-20 07:15:12.307957] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:18:30.115 [2024-11-20 07:15:12.308660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.374 [2024-11-20 07:15:12.487296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.374 [2024-11-20 07:15:12.624522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.633 [2024-11-20 07:15:12.867977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.633 [2024-11-20 07:15:12.868029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.232 [2024-11-20 07:15:13.199932] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.232 [2024-11-20 07:15:13.199993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.232 [2024-11-20 07:15:13.200006] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.232 [2024-11-20 07:15:13.200017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.232 [2024-11-20 07:15:13.200025] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:18:31.232 [2024-11-20 07:15:13.200036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.232 07:15:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.232 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.232 "name": "Existed_Raid", 00:18:31.232 "uuid": "f00ef768-8aab-47f3-be52-74a6f844a15d", 00:18:31.232 "strip_size_kb": 64, 00:18:31.232 "state": "configuring", 00:18:31.232 "raid_level": "raid5f", 00:18:31.232 "superblock": true, 00:18:31.232 "num_base_bdevs": 3, 00:18:31.232 "num_base_bdevs_discovered": 0, 00:18:31.232 "num_base_bdevs_operational": 3, 00:18:31.232 "base_bdevs_list": [ 00:18:31.232 { 00:18:31.232 "name": "BaseBdev1", 00:18:31.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.232 "is_configured": false, 00:18:31.232 "data_offset": 0, 00:18:31.232 "data_size": 0 00:18:31.232 }, 00:18:31.232 { 00:18:31.232 "name": "BaseBdev2", 00:18:31.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.232 "is_configured": false, 00:18:31.232 "data_offset": 0, 00:18:31.232 "data_size": 0 00:18:31.232 }, 00:18:31.232 { 00:18:31.232 "name": "BaseBdev3", 00:18:31.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.232 "is_configured": false, 00:18:31.232 "data_offset": 0, 00:18:31.232 "data_size": 0 00:18:31.232 } 00:18:31.232 ] 00:18:31.232 }' 00:18:31.233 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.233 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 [2024-11-20 07:15:13.695091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.493 
[2024-11-20 07:15:13.695199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 [2024-11-20 07:15:13.707094] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.493 [2024-11-20 07:15:13.707192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.493 [2024-11-20 07:15:13.707228] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.493 [2024-11-20 07:15:13.707257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.493 [2024-11-20 07:15:13.707279] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.493 [2024-11-20 07:15:13.707305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.493 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 [2024-11-20 07:15:13.755877] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.493 BaseBdev1 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.752 [ 00:18:31.752 { 00:18:31.752 "name": "BaseBdev1", 00:18:31.752 "aliases": [ 00:18:31.752 "9b1ac711-a1d6-44be-8b46-4706f8cae3a3" 00:18:31.752 ], 00:18:31.752 "product_name": "Malloc disk", 00:18:31.752 "block_size": 512, 00:18:31.752 
"num_blocks": 65536, 00:18:31.752 "uuid": "9b1ac711-a1d6-44be-8b46-4706f8cae3a3", 00:18:31.752 "assigned_rate_limits": { 00:18:31.752 "rw_ios_per_sec": 0, 00:18:31.752 "rw_mbytes_per_sec": 0, 00:18:31.752 "r_mbytes_per_sec": 0, 00:18:31.752 "w_mbytes_per_sec": 0 00:18:31.752 }, 00:18:31.752 "claimed": true, 00:18:31.752 "claim_type": "exclusive_write", 00:18:31.752 "zoned": false, 00:18:31.752 "supported_io_types": { 00:18:31.752 "read": true, 00:18:31.752 "write": true, 00:18:31.752 "unmap": true, 00:18:31.752 "flush": true, 00:18:31.752 "reset": true, 00:18:31.752 "nvme_admin": false, 00:18:31.752 "nvme_io": false, 00:18:31.752 "nvme_io_md": false, 00:18:31.752 "write_zeroes": true, 00:18:31.752 "zcopy": true, 00:18:31.752 "get_zone_info": false, 00:18:31.752 "zone_management": false, 00:18:31.752 "zone_append": false, 00:18:31.752 "compare": false, 00:18:31.752 "compare_and_write": false, 00:18:31.752 "abort": true, 00:18:31.752 "seek_hole": false, 00:18:31.752 "seek_data": false, 00:18:31.752 "copy": true, 00:18:31.752 "nvme_iov_md": false 00:18:31.752 }, 00:18:31.752 "memory_domains": [ 00:18:31.752 { 00:18:31.752 "dma_device_id": "system", 00:18:31.752 "dma_device_type": 1 00:18:31.752 }, 00:18:31.752 { 00:18:31.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.752 "dma_device_type": 2 00:18:31.752 } 00:18:31.752 ], 00:18:31.752 "driver_specific": {} 00:18:31.752 } 00:18:31.752 ] 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.752 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.753 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.753 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.753 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.753 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.753 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.753 "name": "Existed_Raid", 00:18:31.753 "uuid": "41968c84-6459-4953-a8b8-b0736e8fdad9", 00:18:31.753 "strip_size_kb": 64, 00:18:31.753 "state": "configuring", 00:18:31.753 "raid_level": "raid5f", 00:18:31.753 "superblock": true, 00:18:31.753 "num_base_bdevs": 3, 00:18:31.753 "num_base_bdevs_discovered": 1, 00:18:31.753 "num_base_bdevs_operational": 3, 00:18:31.753 "base_bdevs_list": [ 00:18:31.753 { 00:18:31.753 
"name": "BaseBdev1", 00:18:31.753 "uuid": "9b1ac711-a1d6-44be-8b46-4706f8cae3a3", 00:18:31.753 "is_configured": true, 00:18:31.753 "data_offset": 2048, 00:18:31.753 "data_size": 63488 00:18:31.753 }, 00:18:31.753 { 00:18:31.753 "name": "BaseBdev2", 00:18:31.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.753 "is_configured": false, 00:18:31.753 "data_offset": 0, 00:18:31.753 "data_size": 0 00:18:31.753 }, 00:18:31.753 { 00:18:31.753 "name": "BaseBdev3", 00:18:31.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.753 "is_configured": false, 00:18:31.753 "data_offset": 0, 00:18:31.753 "data_size": 0 00:18:31.753 } 00:18:31.753 ] 00:18:31.753 }' 00:18:31.753 07:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.753 07:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.322 [2024-11-20 07:15:14.291061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.322 [2024-11-20 07:15:14.291204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:32.322 [2024-11-20 07:15:14.299074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.322 [2024-11-20 07:15:14.301180] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.322 [2024-11-20 07:15:14.301263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.322 [2024-11-20 07:15:14.301299] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:32.322 [2024-11-20 07:15:14.301326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.322 "name": "Existed_Raid", 00:18:32.322 "uuid": "d1aea2ed-b1c3-460f-8585-27943b2899ee", 00:18:32.322 "strip_size_kb": 64, 00:18:32.322 "state": "configuring", 00:18:32.322 "raid_level": "raid5f", 00:18:32.322 "superblock": true, 00:18:32.322 "num_base_bdevs": 3, 00:18:32.322 "num_base_bdevs_discovered": 1, 00:18:32.322 "num_base_bdevs_operational": 3, 00:18:32.322 "base_bdevs_list": [ 00:18:32.322 { 00:18:32.322 "name": "BaseBdev1", 00:18:32.322 "uuid": "9b1ac711-a1d6-44be-8b46-4706f8cae3a3", 00:18:32.322 "is_configured": true, 00:18:32.322 "data_offset": 2048, 00:18:32.322 "data_size": 63488 00:18:32.322 }, 00:18:32.322 { 00:18:32.322 "name": "BaseBdev2", 00:18:32.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.322 "is_configured": false, 00:18:32.322 "data_offset": 0, 00:18:32.322 "data_size": 0 00:18:32.322 }, 00:18:32.322 { 00:18:32.322 "name": "BaseBdev3", 00:18:32.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.322 "is_configured": false, 00:18:32.322 "data_offset": 0, 00:18:32.322 "data_size": 
0 00:18:32.322 } 00:18:32.322 ] 00:18:32.322 }' 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.322 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.582 [2024-11-20 07:15:14.786050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.582 BaseBdev2 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.582 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.582 [ 00:18:32.582 { 00:18:32.582 "name": "BaseBdev2", 00:18:32.582 "aliases": [ 00:18:32.582 "62d9c79e-7cc8-4aca-ad36-ec69481c22a0" 00:18:32.582 ], 00:18:32.582 "product_name": "Malloc disk", 00:18:32.582 "block_size": 512, 00:18:32.582 "num_blocks": 65536, 00:18:32.582 "uuid": "62d9c79e-7cc8-4aca-ad36-ec69481c22a0", 00:18:32.582 "assigned_rate_limits": { 00:18:32.582 "rw_ios_per_sec": 0, 00:18:32.582 "rw_mbytes_per_sec": 0, 00:18:32.582 "r_mbytes_per_sec": 0, 00:18:32.582 "w_mbytes_per_sec": 0 00:18:32.582 }, 00:18:32.582 "claimed": true, 00:18:32.582 "claim_type": "exclusive_write", 00:18:32.582 "zoned": false, 00:18:32.582 "supported_io_types": { 00:18:32.582 "read": true, 00:18:32.582 "write": true, 00:18:32.582 "unmap": true, 00:18:32.582 "flush": true, 00:18:32.582 "reset": true, 00:18:32.582 "nvme_admin": false, 00:18:32.582 "nvme_io": false, 00:18:32.582 "nvme_io_md": false, 00:18:32.582 "write_zeroes": true, 00:18:32.582 "zcopy": true, 00:18:32.582 "get_zone_info": false, 00:18:32.582 "zone_management": false, 00:18:32.582 "zone_append": false, 00:18:32.582 "compare": false, 00:18:32.582 "compare_and_write": false, 00:18:32.582 "abort": true, 00:18:32.582 "seek_hole": false, 00:18:32.582 "seek_data": false, 00:18:32.583 "copy": true, 00:18:32.583 "nvme_iov_md": false 00:18:32.583 }, 00:18:32.583 "memory_domains": [ 00:18:32.583 { 00:18:32.583 "dma_device_id": "system", 00:18:32.583 "dma_device_type": 1 00:18:32.583 }, 00:18:32.583 { 00:18:32.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.583 "dma_device_type": 2 00:18:32.583 } 
00:18:32.583 ], 00:18:32.583 "driver_specific": {} 00:18:32.583 } 00:18:32.583 ] 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.583 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.842 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.842 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.842 "name": "Existed_Raid", 00:18:32.842 "uuid": "d1aea2ed-b1c3-460f-8585-27943b2899ee", 00:18:32.842 "strip_size_kb": 64, 00:18:32.842 "state": "configuring", 00:18:32.842 "raid_level": "raid5f", 00:18:32.842 "superblock": true, 00:18:32.842 "num_base_bdevs": 3, 00:18:32.842 "num_base_bdevs_discovered": 2, 00:18:32.842 "num_base_bdevs_operational": 3, 00:18:32.842 "base_bdevs_list": [ 00:18:32.842 { 00:18:32.842 "name": "BaseBdev1", 00:18:32.842 "uuid": "9b1ac711-a1d6-44be-8b46-4706f8cae3a3", 00:18:32.842 "is_configured": true, 00:18:32.842 "data_offset": 2048, 00:18:32.842 "data_size": 63488 00:18:32.842 }, 00:18:32.842 { 00:18:32.842 "name": "BaseBdev2", 00:18:32.842 "uuid": "62d9c79e-7cc8-4aca-ad36-ec69481c22a0", 00:18:32.842 "is_configured": true, 00:18:32.842 "data_offset": 2048, 00:18:32.842 "data_size": 63488 00:18:32.842 }, 00:18:32.842 { 00:18:32.842 "name": "BaseBdev3", 00:18:32.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.842 "is_configured": false, 00:18:32.842 "data_offset": 0, 00:18:32.842 "data_size": 0 00:18:32.842 } 00:18:32.842 ] 00:18:32.842 }' 00:18:32.842 07:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.842 07:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.101 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:33.101 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:33.101 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.360 [2024-11-20 07:15:15.405459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.360 [2024-11-20 07:15:15.405840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:33.360 [2024-11-20 07:15:15.405905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:33.360 [2024-11-20 07:15:15.406209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:33.360 BaseBdev3 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.360 [2024-11-20 07:15:15.411969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:33.360 [2024-11-20 07:15:15.412022] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:33.360 [2024-11-20 07:15:15.412234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.360 [ 00:18:33.360 { 00:18:33.360 "name": "BaseBdev3", 00:18:33.360 "aliases": [ 00:18:33.360 "9936c8ff-238c-4845-83b3-65c286125e0b" 00:18:33.360 ], 00:18:33.360 "product_name": "Malloc disk", 00:18:33.360 "block_size": 512, 00:18:33.360 "num_blocks": 65536, 00:18:33.360 "uuid": "9936c8ff-238c-4845-83b3-65c286125e0b", 00:18:33.360 "assigned_rate_limits": { 00:18:33.360 "rw_ios_per_sec": 0, 00:18:33.360 "rw_mbytes_per_sec": 0, 00:18:33.360 "r_mbytes_per_sec": 0, 00:18:33.360 "w_mbytes_per_sec": 0 00:18:33.360 }, 00:18:33.360 "claimed": true, 00:18:33.360 "claim_type": "exclusive_write", 00:18:33.360 "zoned": false, 00:18:33.360 "supported_io_types": { 00:18:33.360 "read": true, 00:18:33.360 "write": true, 00:18:33.360 "unmap": true, 00:18:33.360 "flush": true, 00:18:33.360 "reset": true, 00:18:33.360 "nvme_admin": false, 00:18:33.360 "nvme_io": false, 00:18:33.360 "nvme_io_md": false, 00:18:33.360 "write_zeroes": true, 00:18:33.360 "zcopy": true, 00:18:33.360 "get_zone_info": false, 00:18:33.360 "zone_management": false, 00:18:33.360 "zone_append": false, 00:18:33.360 "compare": false, 00:18:33.360 "compare_and_write": false, 00:18:33.360 "abort": true, 00:18:33.360 "seek_hole": false, 00:18:33.360 "seek_data": false, 00:18:33.360 "copy": true, 00:18:33.360 
"nvme_iov_md": false 00:18:33.360 }, 00:18:33.360 "memory_domains": [ 00:18:33.360 { 00:18:33.360 "dma_device_id": "system", 00:18:33.360 "dma_device_type": 1 00:18:33.360 }, 00:18:33.360 { 00:18:33.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.360 "dma_device_type": 2 00:18:33.360 } 00:18:33.360 ], 00:18:33.360 "driver_specific": {} 00:18:33.360 } 00:18:33.360 ] 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.360 "name": "Existed_Raid", 00:18:33.360 "uuid": "d1aea2ed-b1c3-460f-8585-27943b2899ee", 00:18:33.360 "strip_size_kb": 64, 00:18:33.360 "state": "online", 00:18:33.360 "raid_level": "raid5f", 00:18:33.360 "superblock": true, 00:18:33.360 "num_base_bdevs": 3, 00:18:33.360 "num_base_bdevs_discovered": 3, 00:18:33.360 "num_base_bdevs_operational": 3, 00:18:33.360 "base_bdevs_list": [ 00:18:33.360 { 00:18:33.360 "name": "BaseBdev1", 00:18:33.360 "uuid": "9b1ac711-a1d6-44be-8b46-4706f8cae3a3", 00:18:33.360 "is_configured": true, 00:18:33.360 "data_offset": 2048, 00:18:33.360 "data_size": 63488 00:18:33.360 }, 00:18:33.360 { 00:18:33.360 "name": "BaseBdev2", 00:18:33.360 "uuid": "62d9c79e-7cc8-4aca-ad36-ec69481c22a0", 00:18:33.360 "is_configured": true, 00:18:33.360 "data_offset": 2048, 00:18:33.360 "data_size": 63488 00:18:33.360 }, 00:18:33.360 { 00:18:33.360 "name": "BaseBdev3", 00:18:33.360 "uuid": "9936c8ff-238c-4845-83b3-65c286125e0b", 00:18:33.360 "is_configured": true, 00:18:33.360 "data_offset": 2048, 00:18:33.360 "data_size": 63488 00:18:33.360 } 00:18:33.360 ] 00:18:33.360 }' 00:18:33.360 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.360 07:15:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.929 [2024-11-20 07:15:15.942273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.929 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:33.929 "name": "Existed_Raid", 00:18:33.929 "aliases": [ 00:18:33.929 "d1aea2ed-b1c3-460f-8585-27943b2899ee" 00:18:33.929 ], 00:18:33.929 "product_name": "Raid Volume", 00:18:33.929 "block_size": 512, 00:18:33.929 "num_blocks": 126976, 00:18:33.929 "uuid": "d1aea2ed-b1c3-460f-8585-27943b2899ee", 00:18:33.929 "assigned_rate_limits": { 00:18:33.929 "rw_ios_per_sec": 0, 00:18:33.929 
"rw_mbytes_per_sec": 0, 00:18:33.929 "r_mbytes_per_sec": 0, 00:18:33.929 "w_mbytes_per_sec": 0 00:18:33.929 }, 00:18:33.929 "claimed": false, 00:18:33.929 "zoned": false, 00:18:33.929 "supported_io_types": { 00:18:33.929 "read": true, 00:18:33.929 "write": true, 00:18:33.929 "unmap": false, 00:18:33.929 "flush": false, 00:18:33.929 "reset": true, 00:18:33.929 "nvme_admin": false, 00:18:33.930 "nvme_io": false, 00:18:33.930 "nvme_io_md": false, 00:18:33.930 "write_zeroes": true, 00:18:33.930 "zcopy": false, 00:18:33.930 "get_zone_info": false, 00:18:33.930 "zone_management": false, 00:18:33.930 "zone_append": false, 00:18:33.930 "compare": false, 00:18:33.930 "compare_and_write": false, 00:18:33.930 "abort": false, 00:18:33.930 "seek_hole": false, 00:18:33.930 "seek_data": false, 00:18:33.930 "copy": false, 00:18:33.930 "nvme_iov_md": false 00:18:33.930 }, 00:18:33.930 "driver_specific": { 00:18:33.930 "raid": { 00:18:33.930 "uuid": "d1aea2ed-b1c3-460f-8585-27943b2899ee", 00:18:33.930 "strip_size_kb": 64, 00:18:33.930 "state": "online", 00:18:33.930 "raid_level": "raid5f", 00:18:33.930 "superblock": true, 00:18:33.930 "num_base_bdevs": 3, 00:18:33.930 "num_base_bdevs_discovered": 3, 00:18:33.930 "num_base_bdevs_operational": 3, 00:18:33.930 "base_bdevs_list": [ 00:18:33.930 { 00:18:33.930 "name": "BaseBdev1", 00:18:33.930 "uuid": "9b1ac711-a1d6-44be-8b46-4706f8cae3a3", 00:18:33.930 "is_configured": true, 00:18:33.930 "data_offset": 2048, 00:18:33.930 "data_size": 63488 00:18:33.930 }, 00:18:33.930 { 00:18:33.930 "name": "BaseBdev2", 00:18:33.930 "uuid": "62d9c79e-7cc8-4aca-ad36-ec69481c22a0", 00:18:33.930 "is_configured": true, 00:18:33.930 "data_offset": 2048, 00:18:33.930 "data_size": 63488 00:18:33.930 }, 00:18:33.930 { 00:18:33.930 "name": "BaseBdev3", 00:18:33.930 "uuid": "9936c8ff-238c-4845-83b3-65c286125e0b", 00:18:33.930 "is_configured": true, 00:18:33.930 "data_offset": 2048, 00:18:33.930 "data_size": 63488 00:18:33.930 } 00:18:33.930 ] 00:18:33.930 } 
00:18:33.930 } 00:18:33.930 }' 00:18:33.930 07:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:33.930 BaseBdev2 00:18:33.930 BaseBdev3' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.930 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.189 [2024-11-20 
07:15:16.225596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.189 07:15:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.189 "name": "Existed_Raid", 00:18:34.189 "uuid": "d1aea2ed-b1c3-460f-8585-27943b2899ee", 00:18:34.189 "strip_size_kb": 64, 00:18:34.189 "state": "online", 00:18:34.189 "raid_level": "raid5f", 00:18:34.189 "superblock": true, 00:18:34.189 "num_base_bdevs": 3, 00:18:34.189 "num_base_bdevs_discovered": 2, 00:18:34.189 "num_base_bdevs_operational": 2, 00:18:34.189 "base_bdevs_list": [ 00:18:34.189 { 00:18:34.189 "name": null, 00:18:34.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.189 "is_configured": false, 00:18:34.189 "data_offset": 0, 00:18:34.189 "data_size": 63488 00:18:34.189 }, 00:18:34.189 { 00:18:34.189 "name": "BaseBdev2", 00:18:34.189 "uuid": "62d9c79e-7cc8-4aca-ad36-ec69481c22a0", 00:18:34.189 "is_configured": true, 00:18:34.189 "data_offset": 2048, 00:18:34.189 "data_size": 63488 00:18:34.189 }, 00:18:34.189 { 00:18:34.189 "name": "BaseBdev3", 00:18:34.189 "uuid": "9936c8ff-238c-4845-83b3-65c286125e0b", 00:18:34.189 "is_configured": true, 00:18:34.189 "data_offset": 2048, 00:18:34.189 "data_size": 63488 00:18:34.189 } 00:18:34.189 ] 00:18:34.189 }' 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.189 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:34.758 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:34.758 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.759 [2024-11-20 07:15:16.821806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:34.759 [2024-11-20 07:15:16.821967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.759 [2024-11-20 07:15:16.921866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:34.759 07:15:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.759 07:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.759 [2024-11-20 07:15:16.981803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:34.759 [2024-11-20 07:15:16.981860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.018 
07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.018 BaseBdev2 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:35.018 07:15:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.018 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.018 [ 00:18:35.018 { 00:18:35.018 "name": "BaseBdev2", 00:18:35.018 "aliases": [ 00:18:35.018 "90a5ca80-6aa8-477a-97bb-a1bb875742d8" 00:18:35.018 ], 00:18:35.018 "product_name": "Malloc disk", 00:18:35.018 "block_size": 512, 00:18:35.018 "num_blocks": 65536, 00:18:35.018 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:35.018 "assigned_rate_limits": { 00:18:35.018 "rw_ios_per_sec": 0, 00:18:35.018 "rw_mbytes_per_sec": 0, 00:18:35.018 "r_mbytes_per_sec": 0, 00:18:35.018 "w_mbytes_per_sec": 0 00:18:35.018 }, 00:18:35.018 "claimed": false, 00:18:35.018 "zoned": false, 00:18:35.018 "supported_io_types": { 00:18:35.018 "read": true, 00:18:35.018 "write": true, 00:18:35.018 "unmap": true, 00:18:35.018 "flush": true, 00:18:35.018 "reset": true, 00:18:35.018 "nvme_admin": false, 00:18:35.018 "nvme_io": false, 00:18:35.018 "nvme_io_md": false, 00:18:35.018 "write_zeroes": true, 00:18:35.018 "zcopy": true, 00:18:35.018 "get_zone_info": false, 
00:18:35.018 "zone_management": false, 00:18:35.018 "zone_append": false, 00:18:35.018 "compare": false, 00:18:35.018 "compare_and_write": false, 00:18:35.018 "abort": true, 00:18:35.018 "seek_hole": false, 00:18:35.018 "seek_data": false, 00:18:35.018 "copy": true, 00:18:35.018 "nvme_iov_md": false 00:18:35.018 }, 00:18:35.018 "memory_domains": [ 00:18:35.018 { 00:18:35.018 "dma_device_id": "system", 00:18:35.018 "dma_device_type": 1 00:18:35.018 }, 00:18:35.018 { 00:18:35.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.019 "dma_device_type": 2 00:18:35.019 } 00:18:35.019 ], 00:18:35.019 "driver_specific": {} 00:18:35.019 } 00:18:35.019 ] 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.019 BaseBdev3 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:35.019 07:15:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.019 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.019 [ 00:18:35.019 { 00:18:35.019 "name": "BaseBdev3", 00:18:35.019 "aliases": [ 00:18:35.019 "a5361912-31e1-48a2-a67b-fff9cff09e04" 00:18:35.019 ], 00:18:35.019 "product_name": "Malloc disk", 00:18:35.019 "block_size": 512, 00:18:35.019 "num_blocks": 65536, 00:18:35.019 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:35.019 "assigned_rate_limits": { 00:18:35.019 "rw_ios_per_sec": 0, 00:18:35.019 "rw_mbytes_per_sec": 0, 00:18:35.019 "r_mbytes_per_sec": 0, 00:18:35.019 "w_mbytes_per_sec": 0 00:18:35.019 }, 00:18:35.019 "claimed": false, 00:18:35.019 "zoned": false, 00:18:35.019 "supported_io_types": { 00:18:35.019 "read": true, 00:18:35.019 "write": true, 00:18:35.019 "unmap": true, 00:18:35.019 "flush": true, 00:18:35.019 "reset": true, 00:18:35.019 "nvme_admin": false, 00:18:35.019 "nvme_io": false, 00:18:35.019 "nvme_io_md": 
false, 00:18:35.019 "write_zeroes": true, 00:18:35.019 "zcopy": true, 00:18:35.019 "get_zone_info": false, 00:18:35.019 "zone_management": false, 00:18:35.278 "zone_append": false, 00:18:35.278 "compare": false, 00:18:35.278 "compare_and_write": false, 00:18:35.278 "abort": true, 00:18:35.278 "seek_hole": false, 00:18:35.278 "seek_data": false, 00:18:35.278 "copy": true, 00:18:35.278 "nvme_iov_md": false 00:18:35.278 }, 00:18:35.278 "memory_domains": [ 00:18:35.278 { 00:18:35.278 "dma_device_id": "system", 00:18:35.278 "dma_device_type": 1 00:18:35.278 }, 00:18:35.278 { 00:18:35.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.278 "dma_device_type": 2 00:18:35.278 } 00:18:35.278 ], 00:18:35.278 "driver_specific": {} 00:18:35.278 } 00:18:35.278 ] 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.278 [2024-11-20 07:15:17.296580] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:35.278 [2024-11-20 07:15:17.296681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:35.278 [2024-11-20 07:15:17.296717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:18:35.278 [2024-11-20 07:15:17.298743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.278 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.279 07:15:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.279 "name": "Existed_Raid", 00:18:35.279 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:35.279 "strip_size_kb": 64, 00:18:35.279 "state": "configuring", 00:18:35.279 "raid_level": "raid5f", 00:18:35.279 "superblock": true, 00:18:35.279 "num_base_bdevs": 3, 00:18:35.279 "num_base_bdevs_discovered": 2, 00:18:35.279 "num_base_bdevs_operational": 3, 00:18:35.279 "base_bdevs_list": [ 00:18:35.279 { 00:18:35.279 "name": "BaseBdev1", 00:18:35.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.279 "is_configured": false, 00:18:35.279 "data_offset": 0, 00:18:35.279 "data_size": 0 00:18:35.279 }, 00:18:35.279 { 00:18:35.279 "name": "BaseBdev2", 00:18:35.279 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:35.279 "is_configured": true, 00:18:35.279 "data_offset": 2048, 00:18:35.279 "data_size": 63488 00:18:35.279 }, 00:18:35.279 { 00:18:35.279 "name": "BaseBdev3", 00:18:35.279 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:35.279 "is_configured": true, 00:18:35.279 "data_offset": 2048, 00:18:35.279 "data_size": 63488 00:18:35.279 } 00:18:35.279 ] 00:18:35.279 }' 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.279 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.537 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:35.537 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.537 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.537 [2024-11-20 07:15:17.795671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.797 
07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:35.797 "name": "Existed_Raid", 00:18:35.797 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:35.797 "strip_size_kb": 64, 00:18:35.797 "state": "configuring", 00:18:35.797 "raid_level": "raid5f", 00:18:35.797 "superblock": true, 00:18:35.797 "num_base_bdevs": 3, 00:18:35.797 "num_base_bdevs_discovered": 1, 00:18:35.797 "num_base_bdevs_operational": 3, 00:18:35.797 "base_bdevs_list": [ 00:18:35.797 { 00:18:35.797 "name": "BaseBdev1", 00:18:35.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.797 "is_configured": false, 00:18:35.797 "data_offset": 0, 00:18:35.797 "data_size": 0 00:18:35.797 }, 00:18:35.797 { 00:18:35.797 "name": null, 00:18:35.797 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:35.797 "is_configured": false, 00:18:35.797 "data_offset": 0, 00:18:35.797 "data_size": 63488 00:18:35.797 }, 00:18:35.797 { 00:18:35.797 "name": "BaseBdev3", 00:18:35.797 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:35.797 "is_configured": true, 00:18:35.797 "data_offset": 2048, 00:18:35.797 "data_size": 63488 00:18:35.797 } 00:18:35.797 ] 00:18:35.797 }' 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.797 07:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.056 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.315 [2024-11-20 07:15:18.359908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.315 BaseBdev1 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:36.315 
07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.315 [ 00:18:36.315 { 00:18:36.315 "name": "BaseBdev1", 00:18:36.315 "aliases": [ 00:18:36.315 "7016c802-051f-4147-bfe8-44550cc4aa5f" 00:18:36.315 ], 00:18:36.315 "product_name": "Malloc disk", 00:18:36.315 "block_size": 512, 00:18:36.315 "num_blocks": 65536, 00:18:36.315 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:36.315 "assigned_rate_limits": { 00:18:36.315 "rw_ios_per_sec": 0, 00:18:36.315 "rw_mbytes_per_sec": 0, 00:18:36.315 "r_mbytes_per_sec": 0, 00:18:36.315 "w_mbytes_per_sec": 0 00:18:36.315 }, 00:18:36.315 "claimed": true, 00:18:36.315 "claim_type": "exclusive_write", 00:18:36.315 "zoned": false, 00:18:36.315 "supported_io_types": { 00:18:36.315 "read": true, 00:18:36.315 "write": true, 00:18:36.315 "unmap": true, 00:18:36.315 "flush": true, 00:18:36.315 "reset": true, 00:18:36.315 "nvme_admin": false, 00:18:36.315 "nvme_io": false, 00:18:36.315 "nvme_io_md": false, 00:18:36.315 "write_zeroes": true, 00:18:36.315 "zcopy": true, 00:18:36.315 "get_zone_info": false, 00:18:36.315 "zone_management": false, 00:18:36.315 "zone_append": false, 00:18:36.315 "compare": false, 00:18:36.315 "compare_and_write": false, 00:18:36.315 "abort": true, 00:18:36.315 "seek_hole": false, 00:18:36.315 "seek_data": false, 00:18:36.315 "copy": true, 00:18:36.315 "nvme_iov_md": false 00:18:36.315 }, 00:18:36.315 "memory_domains": [ 00:18:36.315 { 00:18:36.315 "dma_device_id": "system", 00:18:36.315 "dma_device_type": 1 00:18:36.315 }, 00:18:36.315 { 00:18:36.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.315 "dma_device_type": 2 00:18:36.315 } 00:18:36.315 ], 00:18:36.315 "driver_specific": {} 00:18:36.315 } 00:18:36.315 ] 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.315 
07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.315 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:36.315 "name": "Existed_Raid", 00:18:36.315 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:36.315 "strip_size_kb": 64, 00:18:36.315 "state": "configuring", 00:18:36.315 "raid_level": "raid5f", 00:18:36.315 "superblock": true, 00:18:36.316 "num_base_bdevs": 3, 00:18:36.316 "num_base_bdevs_discovered": 2, 00:18:36.316 "num_base_bdevs_operational": 3, 00:18:36.316 "base_bdevs_list": [ 00:18:36.316 { 00:18:36.316 "name": "BaseBdev1", 00:18:36.316 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:36.316 "is_configured": true, 00:18:36.316 "data_offset": 2048, 00:18:36.316 "data_size": 63488 00:18:36.316 }, 00:18:36.316 { 00:18:36.316 "name": null, 00:18:36.316 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:36.316 "is_configured": false, 00:18:36.316 "data_offset": 0, 00:18:36.316 "data_size": 63488 00:18:36.316 }, 00:18:36.316 { 00:18:36.316 "name": "BaseBdev3", 00:18:36.316 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:36.316 "is_configured": true, 00:18:36.316 "data_offset": 2048, 00:18:36.316 "data_size": 63488 00:18:36.316 } 00:18:36.316 ] 00:18:36.316 }' 00:18:36.316 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.316 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.882 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.882 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.882 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.882 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:36.882 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.882 07:15:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.883 [2024-11-20 07:15:18.931040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.883 07:15:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.883 "name": "Existed_Raid", 00:18:36.883 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:36.883 "strip_size_kb": 64, 00:18:36.883 "state": "configuring", 00:18:36.883 "raid_level": "raid5f", 00:18:36.883 "superblock": true, 00:18:36.883 "num_base_bdevs": 3, 00:18:36.883 "num_base_bdevs_discovered": 1, 00:18:36.883 "num_base_bdevs_operational": 3, 00:18:36.883 "base_bdevs_list": [ 00:18:36.883 { 00:18:36.883 "name": "BaseBdev1", 00:18:36.883 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:36.883 "is_configured": true, 00:18:36.883 "data_offset": 2048, 00:18:36.883 "data_size": 63488 00:18:36.883 }, 00:18:36.883 { 00:18:36.883 "name": null, 00:18:36.883 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:36.883 "is_configured": false, 00:18:36.883 "data_offset": 0, 00:18:36.883 "data_size": 63488 00:18:36.883 }, 00:18:36.883 { 00:18:36.883 "name": null, 00:18:36.883 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:36.883 "is_configured": false, 00:18:36.883 "data_offset": 0, 00:18:36.883 "data_size": 63488 00:18:36.883 } 00:18:36.883 ] 00:18:36.883 }' 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.883 07:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.142 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:18:37.142 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.142 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.142 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.401 [2024-11-20 07:15:19.426243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:37.401 
07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.401 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.402 "name": "Existed_Raid", 00:18:37.402 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:37.402 "strip_size_kb": 64, 00:18:37.402 "state": "configuring", 00:18:37.402 "raid_level": "raid5f", 00:18:37.402 "superblock": true, 00:18:37.402 "num_base_bdevs": 3, 00:18:37.402 "num_base_bdevs_discovered": 2, 00:18:37.402 "num_base_bdevs_operational": 3, 00:18:37.402 "base_bdevs_list": [ 00:18:37.402 { 00:18:37.402 "name": "BaseBdev1", 00:18:37.402 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:37.402 "is_configured": true, 00:18:37.402 "data_offset": 2048, 00:18:37.402 "data_size": 63488 00:18:37.402 }, 00:18:37.402 { 00:18:37.402 "name": null, 00:18:37.402 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:37.402 "is_configured": false, 00:18:37.402 "data_offset": 0, 00:18:37.402 "data_size": 63488 00:18:37.402 }, 
00:18:37.402 { 00:18:37.402 "name": "BaseBdev3", 00:18:37.402 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:37.402 "is_configured": true, 00:18:37.402 "data_offset": 2048, 00:18:37.402 "data_size": 63488 00:18:37.402 } 00:18:37.402 ] 00:18:37.402 }' 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.402 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.969 07:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.969 [2024-11-20 07:15:19.993298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.969 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.970 "name": "Existed_Raid", 00:18:37.970 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:37.970 "strip_size_kb": 64, 00:18:37.970 "state": "configuring", 00:18:37.970 "raid_level": "raid5f", 00:18:37.970 "superblock": true, 00:18:37.970 "num_base_bdevs": 3, 00:18:37.970 "num_base_bdevs_discovered": 1, 00:18:37.970 
"num_base_bdevs_operational": 3, 00:18:37.970 "base_bdevs_list": [ 00:18:37.970 { 00:18:37.970 "name": null, 00:18:37.970 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:37.970 "is_configured": false, 00:18:37.970 "data_offset": 0, 00:18:37.970 "data_size": 63488 00:18:37.970 }, 00:18:37.970 { 00:18:37.970 "name": null, 00:18:37.970 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:37.970 "is_configured": false, 00:18:37.970 "data_offset": 0, 00:18:37.970 "data_size": 63488 00:18:37.970 }, 00:18:37.970 { 00:18:37.970 "name": "BaseBdev3", 00:18:37.970 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:37.970 "is_configured": true, 00:18:37.970 "data_offset": 2048, 00:18:37.970 "data_size": 63488 00:18:37.970 } 00:18:37.970 ] 00:18:37.970 }' 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.970 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.537 07:15:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.537 [2024-11-20 07:15:20.674581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.537 "name": "Existed_Raid", 00:18:38.537 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:38.537 "strip_size_kb": 64, 00:18:38.537 "state": "configuring", 00:18:38.537 "raid_level": "raid5f", 00:18:38.537 "superblock": true, 00:18:38.537 "num_base_bdevs": 3, 00:18:38.537 "num_base_bdevs_discovered": 2, 00:18:38.537 "num_base_bdevs_operational": 3, 00:18:38.537 "base_bdevs_list": [ 00:18:38.537 { 00:18:38.537 "name": null, 00:18:38.537 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:38.537 "is_configured": false, 00:18:38.537 "data_offset": 0, 00:18:38.537 "data_size": 63488 00:18:38.537 }, 00:18:38.537 { 00:18:38.537 "name": "BaseBdev2", 00:18:38.537 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:38.537 "is_configured": true, 00:18:38.537 "data_offset": 2048, 00:18:38.537 "data_size": 63488 00:18:38.537 }, 00:18:38.537 { 00:18:38.537 "name": "BaseBdev3", 00:18:38.537 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:38.537 "is_configured": true, 00:18:38.537 "data_offset": 2048, 00:18:38.537 "data_size": 63488 00:18:38.537 } 00:18:38.537 ] 00:18:38.537 }' 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.537 07:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.126 07:15:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7016c802-051f-4147-bfe8-44550cc4aa5f 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 [2024-11-20 07:15:21.267575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:39.126 [2024-11-20 07:15:21.267893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:39.126 [2024-11-20 07:15:21.267951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:39.126 [2024-11-20 07:15:21.268244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:39.126 NewBaseBdev 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.126 07:15:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 [2024-11-20 07:15:21.275001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:39.126 [2024-11-20 07:15:21.275026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:39.126 [2024-11-20 07:15:21.275222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.126 [ 00:18:39.126 { 00:18:39.126 "name": "NewBaseBdev", 00:18:39.126 
"aliases": [ 00:18:39.126 "7016c802-051f-4147-bfe8-44550cc4aa5f" 00:18:39.126 ], 00:18:39.126 "product_name": "Malloc disk", 00:18:39.126 "block_size": 512, 00:18:39.126 "num_blocks": 65536, 00:18:39.126 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:39.126 "assigned_rate_limits": { 00:18:39.126 "rw_ios_per_sec": 0, 00:18:39.126 "rw_mbytes_per_sec": 0, 00:18:39.126 "r_mbytes_per_sec": 0, 00:18:39.126 "w_mbytes_per_sec": 0 00:18:39.126 }, 00:18:39.126 "claimed": true, 00:18:39.126 "claim_type": "exclusive_write", 00:18:39.126 "zoned": false, 00:18:39.126 "supported_io_types": { 00:18:39.126 "read": true, 00:18:39.126 "write": true, 00:18:39.126 "unmap": true, 00:18:39.126 "flush": true, 00:18:39.126 "reset": true, 00:18:39.126 "nvme_admin": false, 00:18:39.126 "nvme_io": false, 00:18:39.126 "nvme_io_md": false, 00:18:39.126 "write_zeroes": true, 00:18:39.126 "zcopy": true, 00:18:39.126 "get_zone_info": false, 00:18:39.126 "zone_management": false, 00:18:39.126 "zone_append": false, 00:18:39.126 "compare": false, 00:18:39.126 "compare_and_write": false, 00:18:39.126 "abort": true, 00:18:39.126 "seek_hole": false, 00:18:39.126 "seek_data": false, 00:18:39.126 "copy": true, 00:18:39.126 "nvme_iov_md": false 00:18:39.126 }, 00:18:39.126 "memory_domains": [ 00:18:39.126 { 00:18:39.126 "dma_device_id": "system", 00:18:39.126 "dma_device_type": 1 00:18:39.126 }, 00:18:39.126 { 00:18:39.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.126 "dma_device_type": 2 00:18:39.126 } 00:18:39.126 ], 00:18:39.126 "driver_specific": {} 00:18:39.126 } 00:18:39.126 ] 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:39.126 07:15:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.126 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.127 "name": "Existed_Raid", 00:18:39.127 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:39.127 "strip_size_kb": 64, 00:18:39.127 "state": "online", 00:18:39.127 "raid_level": "raid5f", 00:18:39.127 "superblock": true, 00:18:39.127 
"num_base_bdevs": 3, 00:18:39.127 "num_base_bdevs_discovered": 3, 00:18:39.127 "num_base_bdevs_operational": 3, 00:18:39.127 "base_bdevs_list": [ 00:18:39.127 { 00:18:39.127 "name": "NewBaseBdev", 00:18:39.127 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:39.127 "is_configured": true, 00:18:39.127 "data_offset": 2048, 00:18:39.127 "data_size": 63488 00:18:39.127 }, 00:18:39.127 { 00:18:39.127 "name": "BaseBdev2", 00:18:39.127 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:39.127 "is_configured": true, 00:18:39.127 "data_offset": 2048, 00:18:39.127 "data_size": 63488 00:18:39.127 }, 00:18:39.127 { 00:18:39.127 "name": "BaseBdev3", 00:18:39.127 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:39.127 "is_configured": true, 00:18:39.127 "data_offset": 2048, 00:18:39.127 "data_size": 63488 00:18:39.127 } 00:18:39.127 ] 00:18:39.127 }' 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.127 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.701 [2024-11-20 07:15:21.806251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.701 "name": "Existed_Raid", 00:18:39.701 "aliases": [ 00:18:39.701 "42256c04-b37e-410b-8714-55bdf9e65a9c" 00:18:39.701 ], 00:18:39.701 "product_name": "Raid Volume", 00:18:39.701 "block_size": 512, 00:18:39.701 "num_blocks": 126976, 00:18:39.701 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:39.701 "assigned_rate_limits": { 00:18:39.701 "rw_ios_per_sec": 0, 00:18:39.701 "rw_mbytes_per_sec": 0, 00:18:39.701 "r_mbytes_per_sec": 0, 00:18:39.701 "w_mbytes_per_sec": 0 00:18:39.701 }, 00:18:39.701 "claimed": false, 00:18:39.701 "zoned": false, 00:18:39.701 "supported_io_types": { 00:18:39.701 "read": true, 00:18:39.701 "write": true, 00:18:39.701 "unmap": false, 00:18:39.701 "flush": false, 00:18:39.701 "reset": true, 00:18:39.701 "nvme_admin": false, 00:18:39.701 "nvme_io": false, 00:18:39.701 "nvme_io_md": false, 00:18:39.701 "write_zeroes": true, 00:18:39.701 "zcopy": false, 00:18:39.701 "get_zone_info": false, 00:18:39.701 "zone_management": false, 00:18:39.701 "zone_append": false, 00:18:39.701 "compare": false, 00:18:39.701 "compare_and_write": false, 00:18:39.701 "abort": false, 00:18:39.701 "seek_hole": false, 00:18:39.701 "seek_data": false, 00:18:39.701 "copy": false, 00:18:39.701 "nvme_iov_md": false 00:18:39.701 }, 00:18:39.701 "driver_specific": { 00:18:39.701 "raid": { 00:18:39.701 "uuid": "42256c04-b37e-410b-8714-55bdf9e65a9c", 00:18:39.701 
"strip_size_kb": 64, 00:18:39.701 "state": "online", 00:18:39.701 "raid_level": "raid5f", 00:18:39.701 "superblock": true, 00:18:39.701 "num_base_bdevs": 3, 00:18:39.701 "num_base_bdevs_discovered": 3, 00:18:39.701 "num_base_bdevs_operational": 3, 00:18:39.701 "base_bdevs_list": [ 00:18:39.701 { 00:18:39.701 "name": "NewBaseBdev", 00:18:39.701 "uuid": "7016c802-051f-4147-bfe8-44550cc4aa5f", 00:18:39.701 "is_configured": true, 00:18:39.701 "data_offset": 2048, 00:18:39.701 "data_size": 63488 00:18:39.701 }, 00:18:39.701 { 00:18:39.701 "name": "BaseBdev2", 00:18:39.701 "uuid": "90a5ca80-6aa8-477a-97bb-a1bb875742d8", 00:18:39.701 "is_configured": true, 00:18:39.701 "data_offset": 2048, 00:18:39.701 "data_size": 63488 00:18:39.701 }, 00:18:39.701 { 00:18:39.701 "name": "BaseBdev3", 00:18:39.701 "uuid": "a5361912-31e1-48a2-a67b-fff9cff09e04", 00:18:39.701 "is_configured": true, 00:18:39.701 "data_offset": 2048, 00:18:39.701 "data_size": 63488 00:18:39.701 } 00:18:39.701 ] 00:18:39.701 } 00:18:39.701 } 00:18:39.701 }' 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:39.701 BaseBdev2 00:18:39.701 BaseBdev3' 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.701 07:15:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.701 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 07:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.959 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.959 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.959 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.959 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.959 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:39.959 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:39.959 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.960 [2024-11-20 07:15:22.105521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:39.960 [2024-11-20 07:15:22.105565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.960 [2024-11-20 07:15:22.105662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.960 [2024-11-20 07:15:22.105994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.960 [2024-11-20 07:15:22.106010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80978 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
80978 ']' 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80978 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80978 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.960 killing process with pid 80978 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80978' 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80978 00:18:39.960 [2024-11-20 07:15:22.154110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.960 07:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80978 00:18:40.526 [2024-11-20 07:15:22.501081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:41.904 07:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:41.904 00:18:41.904 real 0m11.561s 00:18:41.904 user 0m18.293s 00:18:41.904 sys 0m2.081s 00:18:41.904 ************************************ 00:18:41.904 END TEST raid5f_state_function_test_sb 00:18:41.904 ************************************ 00:18:41.904 07:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.904 07:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.904 07:15:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test 
raid5f_superblock_test raid_superblock_test raid5f 3 00:18:41.904 07:15:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:41.904 07:15:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.904 07:15:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.904 ************************************ 00:18:41.904 START TEST raid5f_superblock_test 00:18:41.904 ************************************ 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:41.904 07:15:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81605 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81605 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81605 ']' 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.904 07:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.904 [2024-11-20 07:15:23.935106] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:18:41.904 [2024-11-20 07:15:23.935360] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81605 ] 00:18:41.904 [2024-11-20 07:15:24.114147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.188 [2024-11-20 07:15:24.235105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.448 [2024-11-20 07:15:24.471219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:42.448 [2024-11-20 07:15:24.471330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.708 malloc1 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.708 [2024-11-20 07:15:24.876032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:42.708 [2024-11-20 07:15:24.876141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.708 [2024-11-20 07:15:24.876182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:42.708 [2024-11-20 07:15:24.876235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.708 [2024-11-20 07:15:24.878566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.708 [2024-11-20 07:15:24.878656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:42.708 pt1 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.708 malloc2 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.708 [2024-11-20 07:15:24.938533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.708 [2024-11-20 07:15:24.938598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.708 [2024-11-20 07:15:24.938622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:42.708 [2024-11-20 07:15:24.938633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.708 [2024-11-20 07:15:24.940965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.708 [2024-11-20 07:15:24.941016] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.708 pt2 00:18:42.708 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.709 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.968 malloc3 00:18:42.968 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.968 07:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:42.968 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.968 07:15:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.968 [2024-11-20 07:15:25.004389] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:42.968 [2024-11-20 07:15:25.004490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.968 [2024-11-20 07:15:25.004528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:42.968 [2024-11-20 07:15:25.004557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.968 [2024-11-20 07:15:25.006852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.968 [2024-11-20 07:15:25.006930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:42.968 pt3 00:18:42.968 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.968 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:42.968 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:42.968 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:42.968 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.968 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.968 [2024-11-20 07:15:25.016446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:42.968 [2024-11-20 07:15:25.018507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.968 [2024-11-20 07:15:25.018626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:42.968 [2024-11-20 07:15:25.018853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:42.968 [2024-11-20 07:15:25.018912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:18:42.968 [2024-11-20 07:15:25.019211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:42.968 [2024-11-20 07:15:25.025480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:42.968 [2024-11-20 07:15:25.025557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:42.968 [2024-11-20 07:15:25.025893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.968 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.969 
07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.969 "name": "raid_bdev1", 00:18:42.969 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:42.969 "strip_size_kb": 64, 00:18:42.969 "state": "online", 00:18:42.969 "raid_level": "raid5f", 00:18:42.969 "superblock": true, 00:18:42.969 "num_base_bdevs": 3, 00:18:42.969 "num_base_bdevs_discovered": 3, 00:18:42.969 "num_base_bdevs_operational": 3, 00:18:42.969 "base_bdevs_list": [ 00:18:42.969 { 00:18:42.969 "name": "pt1", 00:18:42.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.969 "is_configured": true, 00:18:42.969 "data_offset": 2048, 00:18:42.969 "data_size": 63488 00:18:42.969 }, 00:18:42.969 { 00:18:42.969 "name": "pt2", 00:18:42.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.969 "is_configured": true, 00:18:42.969 "data_offset": 2048, 00:18:42.969 "data_size": 63488 00:18:42.969 }, 00:18:42.969 { 00:18:42.969 "name": "pt3", 00:18:42.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.969 "is_configured": true, 00:18:42.969 "data_offset": 2048, 00:18:42.969 "data_size": 63488 00:18:42.969 } 00:18:42.969 ] 00:18:42.969 }' 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.969 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:43.538 07:15:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.538 [2024-11-20 07:15:25.517146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.538 "name": "raid_bdev1", 00:18:43.538 "aliases": [ 00:18:43.538 "a31f9055-0aec-4a14-bd2d-54ec871cff1a" 00:18:43.538 ], 00:18:43.538 "product_name": "Raid Volume", 00:18:43.538 "block_size": 512, 00:18:43.538 "num_blocks": 126976, 00:18:43.538 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:43.538 "assigned_rate_limits": { 00:18:43.538 "rw_ios_per_sec": 0, 00:18:43.538 "rw_mbytes_per_sec": 0, 00:18:43.538 "r_mbytes_per_sec": 0, 00:18:43.538 "w_mbytes_per_sec": 0 00:18:43.538 }, 00:18:43.538 "claimed": false, 00:18:43.538 "zoned": false, 00:18:43.538 "supported_io_types": { 00:18:43.538 "read": true, 00:18:43.538 "write": true, 00:18:43.538 "unmap": false, 00:18:43.538 "flush": false, 00:18:43.538 "reset": true, 00:18:43.538 "nvme_admin": false, 00:18:43.538 "nvme_io": false, 00:18:43.538 "nvme_io_md": false, 
00:18:43.538 "write_zeroes": true, 00:18:43.538 "zcopy": false, 00:18:43.538 "get_zone_info": false, 00:18:43.538 "zone_management": false, 00:18:43.538 "zone_append": false, 00:18:43.538 "compare": false, 00:18:43.538 "compare_and_write": false, 00:18:43.538 "abort": false, 00:18:43.538 "seek_hole": false, 00:18:43.538 "seek_data": false, 00:18:43.538 "copy": false, 00:18:43.538 "nvme_iov_md": false 00:18:43.538 }, 00:18:43.538 "driver_specific": { 00:18:43.538 "raid": { 00:18:43.538 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:43.538 "strip_size_kb": 64, 00:18:43.538 "state": "online", 00:18:43.538 "raid_level": "raid5f", 00:18:43.538 "superblock": true, 00:18:43.538 "num_base_bdevs": 3, 00:18:43.538 "num_base_bdevs_discovered": 3, 00:18:43.538 "num_base_bdevs_operational": 3, 00:18:43.538 "base_bdevs_list": [ 00:18:43.538 { 00:18:43.538 "name": "pt1", 00:18:43.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:43.538 "is_configured": true, 00:18:43.538 "data_offset": 2048, 00:18:43.538 "data_size": 63488 00:18:43.538 }, 00:18:43.538 { 00:18:43.538 "name": "pt2", 00:18:43.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.538 "is_configured": true, 00:18:43.538 "data_offset": 2048, 00:18:43.538 "data_size": 63488 00:18:43.538 }, 00:18:43.538 { 00:18:43.538 "name": "pt3", 00:18:43.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.538 "is_configured": true, 00:18:43.538 "data_offset": 2048, 00:18:43.538 "data_size": 63488 00:18:43.538 } 00:18:43.538 ] 00:18:43.538 } 00:18:43.538 } 00:18:43.538 }' 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:43.538 pt2 00:18:43.538 pt3' 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:43.538 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.539 
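The trace above runs two jq filters from bdev_raid.sh: one (line @188) that lists the configured base bdevs of the raid volume, and one (@189, repeated per base bdev at @192) that joins geometry fields into a comparison string. The sketch below reproduces both filters against a pared-down, hypothetical copy of the `bdev_get_bdevs` JSON shown in the log (only the fields the filters read are kept). It also shows why the comparison pattern in the trace is `\5\1\2\ \ \ `: the `md_size`, `md_interleave`, and `dif_type` keys are absent for these bdevs, jq's `join(" ")` renders the resulting nulls as empty strings, and the output is "512" followed by three spaces.

```shell
# Pared-down, illustrative copy of the bdev_get_bdevs output for the raid
# volume in the log; only the fields the two jq filters touch are kept.
raid_info='{"block_size":512,"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true},
  {"name":"pt3","is_configured":false}]}}}'

# bdev_raid.sh@188: keep only the base bdevs that are already configured.
names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
               | select(.is_configured == true).name' <<< "$raid_info")

# bdev_raid.sh@189: join the geometry fields. Absent keys come back as null,
# which join(" ") renders as empty strings, so the result is "512" plus three
# trailing spaces -- matching the \5\1\2\ \ \  glob pattern in the trace.
geom=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type]
              | join(" ")' <<< "$raid_info")

echo "$names"
[ "$geom" = '512   ' ] && echo "geometry string has the expected trailing spaces"
```

In the real test the same `join(" ")` string is computed once for the raid bdev and once per base bdev, and the two are compared with a `[[ ... == ... ]]` glob match, which is why every space in the expected value appears backslash-escaped in the xtrace output.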
07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:43.539 [2024-11-20 07:15:25.784640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.539 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a31f9055-0aec-4a14-bd2d-54ec871cff1a 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a31f9055-0aec-4a14-bd2d-54ec871cff1a ']' 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.799 07:15:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.799 [2024-11-20 07:15:25.832393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.799 [2024-11-20 07:15:25.832488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.799 [2024-11-20 07:15:25.832590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.799 [2024-11-20 07:15:25.832691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.799 [2024-11-20 07:15:25.832702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.799 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.800 [2024-11-20 07:15:25.988146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:43.800 [2024-11-20 07:15:25.990196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:43.800 [2024-11-20 07:15:25.990257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:43.800 [2024-11-20 07:15:25.990314] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:43.800 [2024-11-20 07:15:25.990384] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:43.800 [2024-11-20 07:15:25.990408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:43.800 [2024-11-20 07:15:25.990427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.800 [2024-11-20 07:15:25.990437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:43.800 request: 00:18:43.800 { 00:18:43.800 "name": "raid_bdev1", 00:18:43.800 "raid_level": "raid5f", 00:18:43.800 "base_bdevs": [ 00:18:43.800 "malloc1", 00:18:43.800 "malloc2", 00:18:43.800 "malloc3" 00:18:43.800 ], 00:18:43.800 "strip_size_kb": 64, 00:18:43.800 "superblock": false, 00:18:43.800 "method": "bdev_raid_create", 00:18:43.800 "req_id": 1 00:18:43.800 } 00:18:43.800 Got JSON-RPC error response 00:18:43.800 response: 00:18:43.800 { 00:18:43.800 "code": -17, 00:18:43.800 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:43.800 } 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.800 07:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
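The `NOT rpc_cmd bdev_raid_create ...` call above is expected to fail with `-17: File exists`, because the malloc base bdevs still carry the superblock of the deleted raid bdev. The trailing `es=1` and `(( !es == 0 ))` lines in the trace are the `NOT` helper from autotest_common.sh inverting that exit status. Below is a simplified sketch of that inversion logic; it is an assumption-level reduction of the real helper, which additionally special-cases statuses above 128 and an optional whitelist.

```shell
# Simplified sketch of autotest_common.sh's NOT helper: succeed only when the
# wrapped command fails. (The real helper also handles es > 128 and a
# whitelist of accepted statuses, omitted here.)
NOT() {
    local es=0
    "$@" || es=$?
    # Exit 0 iff the wrapped command returned non-zero.
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

This is why a *failing* RPC produces a passing trace here: the helper turns the non-zero exit status of `rpc_cmd` into success for the surrounding test.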
00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.800 [2024-11-20 07:15:26.055986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.800 [2024-11-20 07:15:26.056100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.800 [2024-11-20 07:15:26.056151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:43.800 [2024-11-20 07:15:26.056187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.800 [2024-11-20 07:15:26.058741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.800 [2024-11-20 07:15:26.058823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.800 [2024-11-20 07:15:26.058947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:43.800 [2024-11-20 07:15:26.059048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.800 pt1 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.800 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.059 "name": "raid_bdev1", 00:18:44.059 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:44.059 "strip_size_kb": 64, 00:18:44.059 "state": "configuring", 00:18:44.059 "raid_level": "raid5f", 00:18:44.059 "superblock": true, 00:18:44.059 "num_base_bdevs": 3, 00:18:44.059 "num_base_bdevs_discovered": 1, 00:18:44.059 
"num_base_bdevs_operational": 3, 00:18:44.059 "base_bdevs_list": [ 00:18:44.059 { 00:18:44.059 "name": "pt1", 00:18:44.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.059 "is_configured": true, 00:18:44.059 "data_offset": 2048, 00:18:44.059 "data_size": 63488 00:18:44.059 }, 00:18:44.059 { 00:18:44.059 "name": null, 00:18:44.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.059 "is_configured": false, 00:18:44.059 "data_offset": 2048, 00:18:44.059 "data_size": 63488 00:18:44.059 }, 00:18:44.059 { 00:18:44.059 "name": null, 00:18:44.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.059 "is_configured": false, 00:18:44.059 "data_offset": 2048, 00:18:44.059 "data_size": 63488 00:18:44.059 } 00:18:44.059 ] 00:18:44.059 }' 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.059 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.318 [2024-11-20 07:15:26.491257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:44.318 [2024-11-20 07:15:26.491400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.318 [2024-11-20 07:15:26.491431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:44.318 [2024-11-20 07:15:26.491441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.318 [2024-11-20 07:15:26.491917] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.318 [2024-11-20 07:15:26.491943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:44.318 [2024-11-20 07:15:26.492035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:44.318 [2024-11-20 07:15:26.492056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.318 pt2 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.318 [2024-11-20 07:15:26.503233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.318 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.319 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.319 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.319 "name": "raid_bdev1", 00:18:44.319 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:44.319 "strip_size_kb": 64, 00:18:44.319 "state": "configuring", 00:18:44.319 "raid_level": "raid5f", 00:18:44.319 "superblock": true, 00:18:44.319 "num_base_bdevs": 3, 00:18:44.319 "num_base_bdevs_discovered": 1, 00:18:44.319 "num_base_bdevs_operational": 3, 00:18:44.319 "base_bdevs_list": [ 00:18:44.319 { 00:18:44.319 "name": "pt1", 00:18:44.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.319 "is_configured": true, 00:18:44.319 "data_offset": 2048, 00:18:44.319 "data_size": 63488 00:18:44.319 }, 00:18:44.319 { 00:18:44.319 "name": null, 00:18:44.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.319 "is_configured": false, 00:18:44.319 "data_offset": 0, 00:18:44.319 "data_size": 63488 00:18:44.319 }, 00:18:44.319 { 00:18:44.319 "name": null, 00:18:44.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.319 "is_configured": false, 00:18:44.319 "data_offset": 2048, 00:18:44.319 "data_size": 63488 00:18:44.319 } 00:18:44.319 ] 00:18:44.319 }' 00:18:44.319 07:15:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.319 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.888 [2024-11-20 07:15:26.938497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:44.888 [2024-11-20 07:15:26.938648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.888 [2024-11-20 07:15:26.938691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:44.888 [2024-11-20 07:15:26.938757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.888 [2024-11-20 07:15:26.939295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.888 [2024-11-20 07:15:26.939387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:44.888 [2024-11-20 07:15:26.939521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:44.888 [2024-11-20 07:15:26.939581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.888 pt2 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:44.888 07:15:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.888 [2024-11-20 07:15:26.950473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:44.888 [2024-11-20 07:15:26.950570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.888 [2024-11-20 07:15:26.950608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:44.888 [2024-11-20 07:15:26.950640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.888 [2024-11-20 07:15:26.951105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.888 [2024-11-20 07:15:26.951180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:44.888 [2024-11-20 07:15:26.951285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:44.888 [2024-11-20 07:15:26.951361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:44.888 [2024-11-20 07:15:26.951549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:44.888 [2024-11-20 07:15:26.951597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:44.888 [2024-11-20 07:15:26.951903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:44.888 [2024-11-20 07:15:26.958485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:44.888 [2024-11-20 07:15:26.958550] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:44.888 [2024-11-20 07:15:26.958839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.888 pt3 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.888 07:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.888 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.888 "name": "raid_bdev1", 00:18:44.888 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:44.888 "strip_size_kb": 64, 00:18:44.888 "state": "online", 00:18:44.888 "raid_level": "raid5f", 00:18:44.888 "superblock": true, 00:18:44.888 "num_base_bdevs": 3, 00:18:44.888 "num_base_bdevs_discovered": 3, 00:18:44.888 "num_base_bdevs_operational": 3, 00:18:44.888 "base_bdevs_list": [ 00:18:44.888 { 00:18:44.888 "name": "pt1", 00:18:44.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.888 "is_configured": true, 00:18:44.888 "data_offset": 2048, 00:18:44.888 "data_size": 63488 00:18:44.888 }, 00:18:44.888 { 00:18:44.888 "name": "pt2", 00:18:44.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.889 "is_configured": true, 00:18:44.889 "data_offset": 2048, 00:18:44.889 "data_size": 63488 00:18:44.889 }, 00:18:44.889 { 00:18:44.889 "name": "pt3", 00:18:44.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.889 "is_configured": true, 00:18:44.889 "data_offset": 2048, 00:18:44.889 "data_size": 63488 00:18:44.889 } 00:18:44.889 ] 00:18:44.889 }' 00:18:44.889 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.889 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.202 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:45.203 
07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.203 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 [2024-11-20 07:15:27.426178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.492 "name": "raid_bdev1", 00:18:45.492 "aliases": [ 00:18:45.492 "a31f9055-0aec-4a14-bd2d-54ec871cff1a" 00:18:45.492 ], 00:18:45.492 "product_name": "Raid Volume", 00:18:45.492 "block_size": 512, 00:18:45.492 "num_blocks": 126976, 00:18:45.492 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:45.492 "assigned_rate_limits": { 00:18:45.492 "rw_ios_per_sec": 0, 00:18:45.492 "rw_mbytes_per_sec": 0, 00:18:45.492 "r_mbytes_per_sec": 0, 00:18:45.492 "w_mbytes_per_sec": 0 00:18:45.492 }, 00:18:45.492 "claimed": false, 00:18:45.492 "zoned": false, 00:18:45.492 "supported_io_types": { 00:18:45.492 "read": true, 00:18:45.492 "write": true, 00:18:45.492 "unmap": false, 00:18:45.492 "flush": false, 00:18:45.492 "reset": true, 00:18:45.492 "nvme_admin": false, 00:18:45.492 "nvme_io": false, 00:18:45.492 "nvme_io_md": false, 00:18:45.492 "write_zeroes": true, 00:18:45.492 "zcopy": false, 00:18:45.492 "get_zone_info": false, 
00:18:45.492 "zone_management": false, 00:18:45.492 "zone_append": false, 00:18:45.492 "compare": false, 00:18:45.492 "compare_and_write": false, 00:18:45.492 "abort": false, 00:18:45.492 "seek_hole": false, 00:18:45.492 "seek_data": false, 00:18:45.492 "copy": false, 00:18:45.492 "nvme_iov_md": false 00:18:45.492 }, 00:18:45.492 "driver_specific": { 00:18:45.492 "raid": { 00:18:45.492 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:45.492 "strip_size_kb": 64, 00:18:45.492 "state": "online", 00:18:45.492 "raid_level": "raid5f", 00:18:45.492 "superblock": true, 00:18:45.492 "num_base_bdevs": 3, 00:18:45.492 "num_base_bdevs_discovered": 3, 00:18:45.492 "num_base_bdevs_operational": 3, 00:18:45.492 "base_bdevs_list": [ 00:18:45.492 { 00:18:45.492 "name": "pt1", 00:18:45.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.492 "is_configured": true, 00:18:45.492 "data_offset": 2048, 00:18:45.492 "data_size": 63488 00:18:45.492 }, 00:18:45.492 { 00:18:45.492 "name": "pt2", 00:18:45.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.492 "is_configured": true, 00:18:45.492 "data_offset": 2048, 00:18:45.492 "data_size": 63488 00:18:45.492 }, 00:18:45.492 { 00:18:45.492 "name": "pt3", 00:18:45.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:45.492 "is_configured": true, 00:18:45.492 "data_offset": 2048, 00:18:45.492 "data_size": 63488 00:18:45.492 } 00:18:45.492 ] 00:18:45.492 } 00:18:45.492 } 00:18:45.492 }' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:45.492 pt2 00:18:45.492 pt3' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 [2024-11-20 07:15:27.717652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.492 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a31f9055-0aec-4a14-bd2d-54ec871cff1a '!=' a31f9055-0aec-4a14-bd2d-54ec871cff1a ']' 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:45.751 07:15:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.751 [2024-11-20 07:15:27.769414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.751 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.751 "name": "raid_bdev1", 00:18:45.751 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:45.751 "strip_size_kb": 64, 00:18:45.751 "state": "online", 00:18:45.751 "raid_level": "raid5f", 00:18:45.751 "superblock": true, 00:18:45.751 "num_base_bdevs": 3, 00:18:45.751 "num_base_bdevs_discovered": 2, 00:18:45.751 "num_base_bdevs_operational": 2, 00:18:45.751 "base_bdevs_list": [ 00:18:45.751 { 00:18:45.751 "name": null, 00:18:45.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.751 "is_configured": false, 00:18:45.751 "data_offset": 0, 00:18:45.751 "data_size": 63488 00:18:45.751 }, 00:18:45.751 { 00:18:45.751 "name": "pt2", 00:18:45.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.751 "is_configured": true, 00:18:45.751 "data_offset": 2048, 00:18:45.751 "data_size": 63488 00:18:45.751 }, 00:18:45.751 { 00:18:45.751 "name": "pt3", 00:18:45.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:45.751 "is_configured": true, 00:18:45.751 "data_offset": 2048, 00:18:45.751 "data_size": 63488 00:18:45.752 } 00:18:45.752 ] 00:18:45.752 }' 00:18:45.752 07:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.752 07:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.011 [2024-11-20 07:15:28.248522] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:18:46.011 [2024-11-20 07:15:28.248609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.011 [2024-11-20 07:15:28.248728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.011 [2024-11-20 07:15:28.248861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.011 [2024-11-20 07:15:28.248942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.011 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.270 07:15:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.270 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.270 [2024-11-20 07:15:28.328363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.270 [2024-11-20 07:15:28.328424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.270 [2024-11-20 07:15:28.328443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:46.270 [2024-11-20 07:15:28.328455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:46.270 [2024-11-20 07:15:28.330956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.271 [2024-11-20 07:15:28.331044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.271 [2024-11-20 07:15:28.331135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:46.271 [2024-11-20 07:15:28.331199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.271 pt2 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.271 "name": "raid_bdev1", 00:18:46.271 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:46.271 "strip_size_kb": 64, 00:18:46.271 "state": "configuring", 00:18:46.271 "raid_level": "raid5f", 00:18:46.271 "superblock": true, 00:18:46.271 "num_base_bdevs": 3, 00:18:46.271 "num_base_bdevs_discovered": 1, 00:18:46.271 "num_base_bdevs_operational": 2, 00:18:46.271 "base_bdevs_list": [ 00:18:46.271 { 00:18:46.271 "name": null, 00:18:46.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.271 "is_configured": false, 00:18:46.271 "data_offset": 2048, 00:18:46.271 "data_size": 63488 00:18:46.271 }, 00:18:46.271 { 00:18:46.271 "name": "pt2", 00:18:46.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.271 "is_configured": true, 00:18:46.271 "data_offset": 2048, 00:18:46.271 "data_size": 63488 00:18:46.271 }, 00:18:46.271 { 00:18:46.271 "name": null, 00:18:46.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:46.271 "is_configured": false, 00:18:46.271 "data_offset": 2048, 00:18:46.271 "data_size": 63488 00:18:46.271 } 00:18:46.271 ] 00:18:46.271 }' 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.271 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.838 [2024-11-20 07:15:28.803578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:46.838 [2024-11-20 07:15:28.803720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.838 [2024-11-20 07:15:28.803753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:46.838 [2024-11-20 07:15:28.803767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.838 [2024-11-20 07:15:28.804276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.838 [2024-11-20 07:15:28.804302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:46.838 [2024-11-20 07:15:28.804412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:46.838 [2024-11-20 07:15:28.804450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:46.838 [2024-11-20 07:15:28.804599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:46.838 [2024-11-20 07:15:28.804612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:46.838 [2024-11-20 07:15:28.804911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:46.838 [2024-11-20 07:15:28.811548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:46.838 [2024-11-20 07:15:28.811586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:18:46.838 [2024-11-20 07:15:28.811974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.838 pt3 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.838 "name": "raid_bdev1", 00:18:46.838 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:46.838 "strip_size_kb": 64, 00:18:46.838 "state": "online", 00:18:46.838 "raid_level": "raid5f", 00:18:46.838 "superblock": true, 00:18:46.838 "num_base_bdevs": 3, 00:18:46.838 "num_base_bdevs_discovered": 2, 00:18:46.838 "num_base_bdevs_operational": 2, 00:18:46.838 "base_bdevs_list": [ 00:18:46.838 { 00:18:46.838 "name": null, 00:18:46.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.838 "is_configured": false, 00:18:46.838 "data_offset": 2048, 00:18:46.838 "data_size": 63488 00:18:46.838 }, 00:18:46.838 { 00:18:46.838 "name": "pt2", 00:18:46.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.838 "is_configured": true, 00:18:46.838 "data_offset": 2048, 00:18:46.838 "data_size": 63488 00:18:46.838 }, 00:18:46.838 { 00:18:46.838 "name": "pt3", 00:18:46.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:46.838 "is_configured": true, 00:18:46.838 "data_offset": 2048, 00:18:46.838 "data_size": 63488 00:18:46.838 } 00:18:46.838 ] 00:18:46.838 }' 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.838 07:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.097 [2024-11-20 07:15:29.311589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.097 [2024-11-20 07:15:29.311714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.097 [2024-11-20 07:15:29.311849] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:47.097 [2024-11-20 07:15:29.311965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.097 [2024-11-20 07:15:29.312027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.097 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:47.357 07:15:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.357 [2024-11-20 07:15:29.387542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:47.357 [2024-11-20 07:15:29.387631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.357 [2024-11-20 07:15:29.387658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:47.357 [2024-11-20 07:15:29.387670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.357 [2024-11-20 07:15:29.390627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.357 [2024-11-20 07:15:29.390679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:47.357 [2024-11-20 07:15:29.390790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:47.357 [2024-11-20 07:15:29.390843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:47.357 [2024-11-20 07:15:29.391003] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:47.357 [2024-11-20 07:15:29.391016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.357 [2024-11-20 07:15:29.391035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:47.357 [2024-11-20 07:15:29.391163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.357 pt1 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:47.357 07:15:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.357 "name": "raid_bdev1", 00:18:47.357 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:47.357 "strip_size_kb": 64, 00:18:47.357 "state": "configuring", 00:18:47.357 "raid_level": "raid5f", 00:18:47.357 
"superblock": true, 00:18:47.357 "num_base_bdevs": 3, 00:18:47.357 "num_base_bdevs_discovered": 1, 00:18:47.357 "num_base_bdevs_operational": 2, 00:18:47.357 "base_bdevs_list": [ 00:18:47.357 { 00:18:47.357 "name": null, 00:18:47.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.357 "is_configured": false, 00:18:47.357 "data_offset": 2048, 00:18:47.357 "data_size": 63488 00:18:47.357 }, 00:18:47.357 { 00:18:47.357 "name": "pt2", 00:18:47.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.357 "is_configured": true, 00:18:47.357 "data_offset": 2048, 00:18:47.357 "data_size": 63488 00:18:47.357 }, 00:18:47.357 { 00:18:47.357 "name": null, 00:18:47.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:47.357 "is_configured": false, 00:18:47.357 "data_offset": 2048, 00:18:47.357 "data_size": 63488 00:18:47.357 } 00:18:47.357 ] 00:18:47.357 }' 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.357 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.616 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:47.616 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:47.616 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.616 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.876 [2024-11-20 07:15:29.922638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:47.876 [2024-11-20 07:15:29.922793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.876 [2024-11-20 07:15:29.922845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:47.876 [2024-11-20 07:15:29.922891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.876 [2024-11-20 07:15:29.923514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.876 [2024-11-20 07:15:29.923582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:47.876 [2024-11-20 07:15:29.923746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:47.876 [2024-11-20 07:15:29.923811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:47.876 [2024-11-20 07:15:29.923991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:47.876 [2024-11-20 07:15:29.924040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:47.876 [2024-11-20 07:15:29.924408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:47.876 [2024-11-20 07:15:29.932289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:47.876 [2024-11-20 07:15:29.932419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:47.876 [2024-11-20 07:15:29.932856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.876 pt3 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.876 "name": "raid_bdev1", 00:18:47.876 "uuid": "a31f9055-0aec-4a14-bd2d-54ec871cff1a", 00:18:47.876 "strip_size_kb": 64, 00:18:47.876 "state": "online", 00:18:47.876 "raid_level": 
"raid5f", 00:18:47.876 "superblock": true, 00:18:47.876 "num_base_bdevs": 3, 00:18:47.876 "num_base_bdevs_discovered": 2, 00:18:47.876 "num_base_bdevs_operational": 2, 00:18:47.876 "base_bdevs_list": [ 00:18:47.876 { 00:18:47.876 "name": null, 00:18:47.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.876 "is_configured": false, 00:18:47.876 "data_offset": 2048, 00:18:47.876 "data_size": 63488 00:18:47.876 }, 00:18:47.876 { 00:18:47.876 "name": "pt2", 00:18:47.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.876 "is_configured": true, 00:18:47.876 "data_offset": 2048, 00:18:47.876 "data_size": 63488 00:18:47.876 }, 00:18:47.876 { 00:18:47.876 "name": "pt3", 00:18:47.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:47.876 "is_configured": true, 00:18:47.876 "data_offset": 2048, 00:18:47.876 "data_size": 63488 00:18:47.876 } 00:18:47.876 ] 00:18:47.876 }' 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.876 07:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.447 [2024-11-20 07:15:30.461567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a31f9055-0aec-4a14-bd2d-54ec871cff1a '!=' a31f9055-0aec-4a14-bd2d-54ec871cff1a ']' 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81605 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81605 ']' 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81605 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81605 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.447 killing process with pid 81605 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81605' 00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81605 00:18:48.447 [2024-11-20 07:15:30.544028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.447 [2024-11-20 07:15:30.544147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:18:48.447 07:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81605 00:18:48.447 [2024-11-20 07:15:30.544226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.447 [2024-11-20 07:15:30.544242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:48.706 [2024-11-20 07:15:30.915044] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.088 ************************************ 00:18:50.088 END TEST raid5f_superblock_test 00:18:50.088 ************************************ 00:18:50.088 07:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:50.088 00:18:50.088 real 0m8.402s 00:18:50.088 user 0m13.092s 00:18:50.088 sys 0m1.442s 00:18:50.088 07:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.088 07:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.088 07:15:32 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:50.088 07:15:32 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:50.088 07:15:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:50.088 07:15:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.088 07:15:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.088 ************************************ 00:18:50.088 START TEST raid5f_rebuild_test 00:18:50.088 ************************************ 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:50.088 07:15:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82050 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82050 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82050 ']' 00:18:50.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.088 07:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.348 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:50.348 Zero copy mechanism will not be used. 00:18:50.348 [2024-11-20 07:15:32.413513] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:18:50.348 [2024-11-20 07:15:32.413636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82050 ] 00:18:50.348 [2024-11-20 07:15:32.591524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.607 [2024-11-20 07:15:32.716892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.866 [2024-11-20 07:15:32.947881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.866 [2024-11-20 07:15:32.948042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.126 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.126 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:51.126 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.126 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:51.126 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.126 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 BaseBdev1_malloc 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 [2024-11-20 07:15:33.406418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.387 [2024-11-20 07:15:33.406504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.387 [2024-11-20 07:15:33.406536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:51.387 [2024-11-20 07:15:33.406550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.387 [2024-11-20 07:15:33.409128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.387 [2024-11-20 07:15:33.409244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.387 BaseBdev1 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 BaseBdev2_malloc 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 [2024-11-20 07:15:33.464155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:18:51.387 [2024-11-20 07:15:33.464218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.387 [2024-11-20 07:15:33.464237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:51.387 [2024-11-20 07:15:33.464250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.387 [2024-11-20 07:15:33.466659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.387 [2024-11-20 07:15:33.466705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:51.387 BaseBdev2 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 BaseBdev3_malloc 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 [2024-11-20 07:15:33.533141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:51.387 [2024-11-20 07:15:33.533204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.387 [2024-11-20 07:15:33.533230] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:18:51.387 [2024-11-20 07:15:33.533243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.387 [2024-11-20 07:15:33.535608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.387 [2024-11-20 07:15:33.535648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:51.387 BaseBdev3 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 spare_malloc 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 spare_delay 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 [2024-11-20 07:15:33.607416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.387 [2024-11-20 07:15:33.607483] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.387 [2024-11-20 07:15:33.607505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:51.387 [2024-11-20 07:15:33.607519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.387 [2024-11-20 07:15:33.610036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.387 [2024-11-20 07:15:33.610087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.387 spare 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.387 [2024-11-20 07:15:33.619481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.387 [2024-11-20 07:15:33.621661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.387 [2024-11-20 07:15:33.621743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:51.387 [2024-11-20 07:15:33.621858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:51.387 [2024-11-20 07:15:33.621872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:51.387 [2024-11-20 07:15:33.622207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.387 [2024-11-20 07:15:33.628920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:51.387 [2024-11-20 07:15:33.628956] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:51.387 [2024-11-20 07:15:33.629234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.387 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.647 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.647 07:15:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.647 "name": "raid_bdev1", 00:18:51.647 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:51.647 "strip_size_kb": 64, 00:18:51.647 "state": "online", 00:18:51.647 "raid_level": "raid5f", 00:18:51.647 "superblock": false, 00:18:51.647 "num_base_bdevs": 3, 00:18:51.647 "num_base_bdevs_discovered": 3, 00:18:51.647 "num_base_bdevs_operational": 3, 00:18:51.647 "base_bdevs_list": [ 00:18:51.647 { 00:18:51.647 "name": "BaseBdev1", 00:18:51.647 "uuid": "6472730c-5f01-518f-88df-bbf49c23008d", 00:18:51.647 "is_configured": true, 00:18:51.647 "data_offset": 0, 00:18:51.647 "data_size": 65536 00:18:51.647 }, 00:18:51.647 { 00:18:51.647 "name": "BaseBdev2", 00:18:51.647 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:51.647 "is_configured": true, 00:18:51.647 "data_offset": 0, 00:18:51.647 "data_size": 65536 00:18:51.647 }, 00:18:51.647 { 00:18:51.647 "name": "BaseBdev3", 00:18:51.647 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:51.647 "is_configured": true, 00:18:51.647 "data_offset": 0, 00:18:51.647 "data_size": 65536 00:18:51.647 } 00:18:51.647 ] 00:18:51.647 }' 00:18:51.647 07:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.647 07:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.906 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:51.906 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.906 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.906 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:51.906 [2024-11-20 07:15:34.132366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.906 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:18:52.166 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:52.166 [2024-11-20 07:15:34.415683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:52.426 /dev/nbd0 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:52.426 1+0 records in 00:18:52.426 1+0 records out 00:18:52.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409494 s, 10.0 MB/s 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:52.426 07:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:52.995 512+0 records in 00:18:52.995 512+0 records out 00:18:52.995 67108864 bytes (67 MB, 64 MiB) copied, 0.519731 s, 129 MB/s 00:18:52.995 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:52.995 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.995 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:52.995 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:52.995 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:52.995 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:52.995 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:52.995 [2024-11-20 07:15:35.249608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.255 [2024-11-20 07:15:35.298345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.255 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.255 "name": "raid_bdev1", 00:18:53.255 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:53.255 "strip_size_kb": 64, 00:18:53.255 "state": "online", 00:18:53.255 "raid_level": "raid5f", 00:18:53.255 "superblock": false, 00:18:53.255 "num_base_bdevs": 3, 00:18:53.255 "num_base_bdevs_discovered": 2, 00:18:53.255 "num_base_bdevs_operational": 2, 00:18:53.255 "base_bdevs_list": [ 00:18:53.255 { 00:18:53.256 "name": null, 00:18:53.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.256 "is_configured": false, 00:18:53.256 "data_offset": 0, 00:18:53.256 "data_size": 65536 00:18:53.256 }, 00:18:53.256 { 00:18:53.256 "name": "BaseBdev2", 00:18:53.256 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:53.256 "is_configured": true, 00:18:53.256 "data_offset": 0, 00:18:53.256 "data_size": 65536 00:18:53.256 }, 00:18:53.256 { 00:18:53.256 "name": "BaseBdev3", 00:18:53.256 "uuid": 
"13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:53.256 "is_configured": true, 00:18:53.256 "data_offset": 0, 00:18:53.256 "data_size": 65536 00:18:53.256 } 00:18:53.256 ] 00:18:53.256 }' 00:18:53.256 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.256 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.825 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:53.825 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.825 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.825 [2024-11-20 07:15:35.789525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.825 [2024-11-20 07:15:35.809040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:53.825 07:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.825 07:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:53.825 [2024-11-20 07:15:35.818498] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.764 07:15:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.764 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.764 "name": "raid_bdev1", 00:18:54.764 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:54.764 "strip_size_kb": 64, 00:18:54.764 "state": "online", 00:18:54.764 "raid_level": "raid5f", 00:18:54.764 "superblock": false, 00:18:54.764 "num_base_bdevs": 3, 00:18:54.764 "num_base_bdevs_discovered": 3, 00:18:54.764 "num_base_bdevs_operational": 3, 00:18:54.764 "process": { 00:18:54.764 "type": "rebuild", 00:18:54.764 "target": "spare", 00:18:54.764 "progress": { 00:18:54.764 "blocks": 20480, 00:18:54.764 "percent": 15 00:18:54.764 } 00:18:54.764 }, 00:18:54.764 "base_bdevs_list": [ 00:18:54.764 { 00:18:54.764 "name": "spare", 00:18:54.764 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:18:54.764 "is_configured": true, 00:18:54.764 "data_offset": 0, 00:18:54.764 "data_size": 65536 00:18:54.764 }, 00:18:54.764 { 00:18:54.764 "name": "BaseBdev2", 00:18:54.764 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:54.764 "is_configured": true, 00:18:54.764 "data_offset": 0, 00:18:54.764 "data_size": 65536 00:18:54.764 }, 00:18:54.764 { 00:18:54.764 "name": "BaseBdev3", 00:18:54.764 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:54.764 "is_configured": true, 00:18:54.764 "data_offset": 0, 00:18:54.764 "data_size": 65536 00:18:54.764 } 00:18:54.764 ] 00:18:54.764 }' 00:18:54.765 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.765 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.765 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.765 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.765 07:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:54.765 07:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.765 07:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.765 [2024-11-20 07:15:36.971000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.024 [2024-11-20 07:15:37.031216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:55.024 [2024-11-20 07:15:37.031315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.024 [2024-11-20 07:15:37.031363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.024 [2024-11-20 07:15:37.031375] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.024 "name": "raid_bdev1", 00:18:55.024 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:55.024 "strip_size_kb": 64, 00:18:55.024 "state": "online", 00:18:55.024 "raid_level": "raid5f", 00:18:55.024 "superblock": false, 00:18:55.024 "num_base_bdevs": 3, 00:18:55.024 "num_base_bdevs_discovered": 2, 00:18:55.024 "num_base_bdevs_operational": 2, 00:18:55.024 "base_bdevs_list": [ 00:18:55.024 { 00:18:55.024 "name": null, 00:18:55.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.024 "is_configured": false, 00:18:55.024 "data_offset": 0, 00:18:55.024 "data_size": 65536 00:18:55.024 }, 00:18:55.024 { 00:18:55.024 "name": "BaseBdev2", 00:18:55.024 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:55.024 "is_configured": true, 00:18:55.024 "data_offset": 0, 00:18:55.024 "data_size": 65536 00:18:55.024 }, 00:18:55.024 { 00:18:55.024 "name": "BaseBdev3", 00:18:55.024 "uuid": 
"13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:55.024 "is_configured": true, 00:18:55.024 "data_offset": 0, 00:18:55.024 "data_size": 65536 00:18:55.024 } 00:18:55.024 ] 00:18:55.024 }' 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.024 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.283 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.542 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.542 "name": "raid_bdev1", 00:18:55.542 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:55.542 "strip_size_kb": 64, 00:18:55.542 "state": "online", 00:18:55.542 "raid_level": "raid5f", 00:18:55.542 "superblock": false, 00:18:55.542 "num_base_bdevs": 3, 00:18:55.542 "num_base_bdevs_discovered": 2, 00:18:55.542 "num_base_bdevs_operational": 2, 00:18:55.542 "base_bdevs_list": [ 00:18:55.542 { 00:18:55.542 
"name": null, 00:18:55.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.542 "is_configured": false, 00:18:55.542 "data_offset": 0, 00:18:55.542 "data_size": 65536 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "name": "BaseBdev2", 00:18:55.542 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:55.542 "is_configured": true, 00:18:55.542 "data_offset": 0, 00:18:55.542 "data_size": 65536 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "name": "BaseBdev3", 00:18:55.542 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:55.543 "is_configured": true, 00:18:55.543 "data_offset": 0, 00:18:55.543 "data_size": 65536 00:18:55.543 } 00:18:55.543 ] 00:18:55.543 }' 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.543 [2024-11-20 07:15:37.684917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.543 [2024-11-20 07:15:37.705189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.543 07:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:55.543 [2024-11-20 07:15:37.715513] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.478 07:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.738 "name": "raid_bdev1", 00:18:56.738 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:56.738 "strip_size_kb": 64, 00:18:56.738 "state": "online", 00:18:56.738 "raid_level": "raid5f", 00:18:56.738 "superblock": false, 00:18:56.738 "num_base_bdevs": 3, 00:18:56.738 "num_base_bdevs_discovered": 3, 00:18:56.738 "num_base_bdevs_operational": 3, 00:18:56.738 "process": { 00:18:56.738 "type": "rebuild", 00:18:56.738 "target": "spare", 00:18:56.738 "progress": { 00:18:56.738 "blocks": 18432, 00:18:56.738 "percent": 14 00:18:56.738 } 00:18:56.738 }, 00:18:56.738 "base_bdevs_list": [ 00:18:56.738 { 00:18:56.738 "name": "spare", 00:18:56.738 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:18:56.738 "is_configured": true, 00:18:56.738 "data_offset": 0, 
00:18:56.738 "data_size": 65536 00:18:56.738 }, 00:18:56.738 { 00:18:56.738 "name": "BaseBdev2", 00:18:56.738 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:56.738 "is_configured": true, 00:18:56.738 "data_offset": 0, 00:18:56.738 "data_size": 65536 00:18:56.738 }, 00:18:56.738 { 00:18:56.738 "name": "BaseBdev3", 00:18:56.738 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:56.738 "is_configured": true, 00:18:56.738 "data_offset": 0, 00:18:56.738 "data_size": 65536 00:18:56.738 } 00:18:56.738 ] 00:18:56.738 }' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=574 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.738 07:15:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.738 "name": "raid_bdev1", 00:18:56.738 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:56.738 "strip_size_kb": 64, 00:18:56.738 "state": "online", 00:18:56.738 "raid_level": "raid5f", 00:18:56.738 "superblock": false, 00:18:56.738 "num_base_bdevs": 3, 00:18:56.738 "num_base_bdevs_discovered": 3, 00:18:56.738 "num_base_bdevs_operational": 3, 00:18:56.738 "process": { 00:18:56.738 "type": "rebuild", 00:18:56.738 "target": "spare", 00:18:56.738 "progress": { 00:18:56.738 "blocks": 22528, 00:18:56.738 "percent": 17 00:18:56.738 } 00:18:56.738 }, 00:18:56.738 "base_bdevs_list": [ 00:18:56.738 { 00:18:56.738 "name": "spare", 00:18:56.738 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:18:56.738 "is_configured": true, 00:18:56.738 "data_offset": 0, 00:18:56.738 "data_size": 65536 00:18:56.738 }, 00:18:56.738 { 00:18:56.738 "name": "BaseBdev2", 00:18:56.738 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:56.738 "is_configured": true, 00:18:56.738 "data_offset": 0, 00:18:56.738 "data_size": 65536 00:18:56.738 }, 00:18:56.738 { 00:18:56.738 "name": "BaseBdev3", 00:18:56.738 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:56.738 "is_configured": true, 00:18:56.738 "data_offset": 0, 00:18:56.738 "data_size": 65536 00:18:56.738 } 
00:18:56.738 ] 00:18:56.738 }' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.738 07:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.000 07:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.000 07:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.939 "name": "raid_bdev1", 00:18:57.939 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:57.939 
"strip_size_kb": 64, 00:18:57.939 "state": "online", 00:18:57.939 "raid_level": "raid5f", 00:18:57.939 "superblock": false, 00:18:57.939 "num_base_bdevs": 3, 00:18:57.939 "num_base_bdevs_discovered": 3, 00:18:57.939 "num_base_bdevs_operational": 3, 00:18:57.939 "process": { 00:18:57.939 "type": "rebuild", 00:18:57.939 "target": "spare", 00:18:57.939 "progress": { 00:18:57.939 "blocks": 47104, 00:18:57.939 "percent": 35 00:18:57.939 } 00:18:57.939 }, 00:18:57.939 "base_bdevs_list": [ 00:18:57.939 { 00:18:57.939 "name": "spare", 00:18:57.939 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:18:57.939 "is_configured": true, 00:18:57.939 "data_offset": 0, 00:18:57.939 "data_size": 65536 00:18:57.939 }, 00:18:57.939 { 00:18:57.939 "name": "BaseBdev2", 00:18:57.939 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:57.939 "is_configured": true, 00:18:57.939 "data_offset": 0, 00:18:57.939 "data_size": 65536 00:18:57.939 }, 00:18:57.939 { 00:18:57.939 "name": "BaseBdev3", 00:18:57.939 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:57.939 "is_configured": true, 00:18:57.939 "data_offset": 0, 00:18:57.939 "data_size": 65536 00:18:57.939 } 00:18:57.939 ] 00:18:57.939 }' 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.939 07:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.317 07:15:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.317 "name": "raid_bdev1", 00:18:59.317 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:18:59.317 "strip_size_kb": 64, 00:18:59.317 "state": "online", 00:18:59.317 "raid_level": "raid5f", 00:18:59.317 "superblock": false, 00:18:59.317 "num_base_bdevs": 3, 00:18:59.317 "num_base_bdevs_discovered": 3, 00:18:59.317 "num_base_bdevs_operational": 3, 00:18:59.317 "process": { 00:18:59.317 "type": "rebuild", 00:18:59.317 "target": "spare", 00:18:59.317 "progress": { 00:18:59.317 "blocks": 69632, 00:18:59.317 "percent": 53 00:18:59.317 } 00:18:59.317 }, 00:18:59.317 "base_bdevs_list": [ 00:18:59.317 { 00:18:59.317 "name": "spare", 00:18:59.317 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:18:59.317 "is_configured": true, 00:18:59.317 "data_offset": 0, 00:18:59.317 "data_size": 65536 00:18:59.317 }, 00:18:59.317 { 00:18:59.317 "name": "BaseBdev2", 00:18:59.317 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:18:59.317 
"is_configured": true, 00:18:59.317 "data_offset": 0, 00:18:59.317 "data_size": 65536 00:18:59.317 }, 00:18:59.317 { 00:18:59.317 "name": "BaseBdev3", 00:18:59.317 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:18:59.317 "is_configured": true, 00:18:59.317 "data_offset": 0, 00:18:59.317 "data_size": 65536 00:18:59.317 } 00:18:59.317 ] 00:18:59.317 }' 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.317 07:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.276 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.276 "name": "raid_bdev1", 00:19:00.276 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:19:00.276 "strip_size_kb": 64, 00:19:00.276 "state": "online", 00:19:00.277 "raid_level": "raid5f", 00:19:00.277 "superblock": false, 00:19:00.277 "num_base_bdevs": 3, 00:19:00.277 "num_base_bdevs_discovered": 3, 00:19:00.277 "num_base_bdevs_operational": 3, 00:19:00.277 "process": { 00:19:00.277 "type": "rebuild", 00:19:00.277 "target": "spare", 00:19:00.277 "progress": { 00:19:00.277 "blocks": 92160, 00:19:00.277 "percent": 70 00:19:00.277 } 00:19:00.277 }, 00:19:00.277 "base_bdevs_list": [ 00:19:00.277 { 00:19:00.277 "name": "spare", 00:19:00.277 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:19:00.277 "is_configured": true, 00:19:00.277 "data_offset": 0, 00:19:00.277 "data_size": 65536 00:19:00.277 }, 00:19:00.277 { 00:19:00.277 "name": "BaseBdev2", 00:19:00.277 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:19:00.277 "is_configured": true, 00:19:00.277 "data_offset": 0, 00:19:00.277 "data_size": 65536 00:19:00.277 }, 00:19:00.277 { 00:19:00.277 "name": "BaseBdev3", 00:19:00.277 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:19:00.277 "is_configured": true, 00:19:00.277 "data_offset": 0, 00:19:00.277 "data_size": 65536 00:19:00.277 } 00:19:00.277 ] 00:19:00.277 }' 00:19:00.277 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.277 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.277 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.277 07:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.277 07:15:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:01.212 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:01.212 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.212 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.212 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.212 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.212 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.471 "name": "raid_bdev1", 00:19:01.471 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:19:01.471 "strip_size_kb": 64, 00:19:01.471 "state": "online", 00:19:01.471 "raid_level": "raid5f", 00:19:01.471 "superblock": false, 00:19:01.471 "num_base_bdevs": 3, 00:19:01.471 "num_base_bdevs_discovered": 3, 00:19:01.471 "num_base_bdevs_operational": 3, 00:19:01.471 "process": { 00:19:01.471 "type": "rebuild", 00:19:01.471 "target": "spare", 00:19:01.471 "progress": { 00:19:01.471 "blocks": 116736, 00:19:01.471 "percent": 89 00:19:01.471 } 00:19:01.471 }, 00:19:01.471 "base_bdevs_list": [ 00:19:01.471 { 
00:19:01.471 "name": "spare", 00:19:01.471 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:19:01.471 "is_configured": true, 00:19:01.471 "data_offset": 0, 00:19:01.471 "data_size": 65536 00:19:01.471 }, 00:19:01.471 { 00:19:01.471 "name": "BaseBdev2", 00:19:01.471 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:19:01.471 "is_configured": true, 00:19:01.471 "data_offset": 0, 00:19:01.471 "data_size": 65536 00:19:01.471 }, 00:19:01.471 { 00:19:01.471 "name": "BaseBdev3", 00:19:01.471 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:19:01.471 "is_configured": true, 00:19:01.471 "data_offset": 0, 00:19:01.471 "data_size": 65536 00:19:01.471 } 00:19:01.471 ] 00:19:01.471 }' 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.471 07:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:02.039 [2024-11-20 07:15:44.179945] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:02.039 [2024-11-20 07:15:44.180069] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:02.039 [2024-11-20 07:15:44.180128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.607 07:15:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.607 "name": "raid_bdev1", 00:19:02.607 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:19:02.607 "strip_size_kb": 64, 00:19:02.607 "state": "online", 00:19:02.607 "raid_level": "raid5f", 00:19:02.607 "superblock": false, 00:19:02.607 "num_base_bdevs": 3, 00:19:02.607 "num_base_bdevs_discovered": 3, 00:19:02.607 "num_base_bdevs_operational": 3, 00:19:02.607 "base_bdevs_list": [ 00:19:02.607 { 00:19:02.607 "name": "spare", 00:19:02.607 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:19:02.607 "is_configured": true, 00:19:02.607 "data_offset": 0, 00:19:02.607 "data_size": 65536 00:19:02.607 }, 00:19:02.607 { 00:19:02.607 "name": "BaseBdev2", 00:19:02.607 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:19:02.607 "is_configured": true, 00:19:02.607 "data_offset": 0, 00:19:02.607 "data_size": 65536 00:19:02.607 }, 00:19:02.607 { 00:19:02.607 "name": "BaseBdev3", 00:19:02.607 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:19:02.607 "is_configured": true, 00:19:02.607 "data_offset": 0, 00:19:02.607 "data_size": 65536 00:19:02.607 } 
00:19:02.607 ] 00:19:02.607 }' 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.607 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.607 "name": "raid_bdev1", 00:19:02.607 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:19:02.607 "strip_size_kb": 64, 00:19:02.607 "state": "online", 00:19:02.607 "raid_level": "raid5f", 00:19:02.608 "superblock": false, 
00:19:02.608 "num_base_bdevs": 3, 00:19:02.608 "num_base_bdevs_discovered": 3, 00:19:02.608 "num_base_bdevs_operational": 3, 00:19:02.608 "base_bdevs_list": [ 00:19:02.608 { 00:19:02.608 "name": "spare", 00:19:02.608 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:19:02.608 "is_configured": true, 00:19:02.608 "data_offset": 0, 00:19:02.608 "data_size": 65536 00:19:02.608 }, 00:19:02.608 { 00:19:02.608 "name": "BaseBdev2", 00:19:02.608 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:19:02.608 "is_configured": true, 00:19:02.608 "data_offset": 0, 00:19:02.608 "data_size": 65536 00:19:02.608 }, 00:19:02.608 { 00:19:02.608 "name": "BaseBdev3", 00:19:02.608 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 00:19:02.608 "is_configured": true, 00:19:02.608 "data_offset": 0, 00:19:02.608 "data_size": 65536 00:19:02.608 } 00:19:02.608 ] 00:19:02.608 }' 00:19:02.608 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.869 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.869 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.870 
07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.870 "name": "raid_bdev1", 00:19:02.870 "uuid": "400f3762-ee4f-45ed-9588-1aace70e7638", 00:19:02.870 "strip_size_kb": 64, 00:19:02.870 "state": "online", 00:19:02.870 "raid_level": "raid5f", 00:19:02.870 "superblock": false, 00:19:02.870 "num_base_bdevs": 3, 00:19:02.870 "num_base_bdevs_discovered": 3, 00:19:02.870 "num_base_bdevs_operational": 3, 00:19:02.870 "base_bdevs_list": [ 00:19:02.870 { 00:19:02.870 "name": "spare", 00:19:02.870 "uuid": "1bb85156-e635-5a1d-9bb1-ebeda48288cc", 00:19:02.870 "is_configured": true, 00:19:02.870 "data_offset": 0, 00:19:02.870 "data_size": 65536 00:19:02.870 }, 00:19:02.870 { 00:19:02.870 "name": "BaseBdev2", 00:19:02.870 "uuid": "32686019-5e56-5941-9081-99db516bd699", 00:19:02.870 "is_configured": true, 00:19:02.870 "data_offset": 0, 00:19:02.870 "data_size": 65536 00:19:02.870 }, 00:19:02.870 { 00:19:02.870 "name": "BaseBdev3", 00:19:02.870 "uuid": "13aaeeb8-ba4c-5003-943b-173c028359e8", 
00:19:02.870 "is_configured": true, 00:19:02.870 "data_offset": 0, 00:19:02.870 "data_size": 65536 00:19:02.870 } 00:19:02.870 ] 00:19:02.870 }' 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.870 07:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.436 [2024-11-20 07:15:45.424861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.436 [2024-11-20 07:15:45.424957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.436 [2024-11-20 07:15:45.425072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.436 [2024-11-20 07:15:45.425175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.436 [2024-11-20 07:15:45.425194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.436 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:03.696 /dev/nbd0 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.696 1+0 records in 00:19:03.696 1+0 records out 00:19:03.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438689 s, 9.3 MB/s 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.696 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:03.956 /dev/nbd1 00:19:03.956 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:03.956 07:15:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:03.956 07:15:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:03.956 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:03.956 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:03.956 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:03.956 07:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.956 1+0 records in 00:19:03.956 1+0 records out 00:19:03.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422646 s, 9.7 MB/s 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.956 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.216 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.217 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.217 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.217 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.217 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82050 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82050 ']' 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82050 00:19:04.476 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:04.735 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.735 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82050 00:19:04.735 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.735 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.735 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82050' 00:19:04.735 killing process with pid 82050 00:19:04.735 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82050 00:19:04.735 
Received shutdown signal, test time was about 60.000000 seconds 00:19:04.735 00:19:04.735 Latency(us) 00:19:04.735 [2024-11-20T07:15:47.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.735 [2024-11-20T07:15:47.000Z] =================================================================================================================== 00:19:04.735 [2024-11-20T07:15:47.000Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:04.735 07:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82050 00:19:04.735 [2024-11-20 07:15:46.763278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.997 [2024-11-20 07:15:47.214021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.377 ************************************ 00:19:06.377 END TEST raid5f_rebuild_test 00:19:06.377 ************************************ 00:19:06.377 07:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:06.377 00:19:06.377 real 0m16.157s 00:19:06.377 user 0m19.911s 00:19:06.377 sys 0m2.244s 00:19:06.377 07:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.377 07:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.377 07:15:48 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:19:06.378 07:15:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:06.378 07:15:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.378 07:15:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.378 ************************************ 00:19:06.378 START TEST raid5f_rebuild_test_sb 00:19:06.378 ************************************ 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:19:06.378 
07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82509 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82509 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82509 ']' 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.378 07:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.378 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:06.378 Zero copy mechanism will not be used. 00:19:06.378 [2024-11-20 07:15:48.636851] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:19:06.378 [2024-11-20 07:15:48.636990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82509 ] 00:19:06.639 [2024-11-20 07:15:48.815757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.900 [2024-11-20 07:15:48.942848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.158 [2024-11-20 07:15:49.176577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.158 [2024-11-20 07:15:49.176631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:07.417 07:15:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.417 BaseBdev1_malloc 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.417 [2024-11-20 07:15:49.595430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:07.417 [2024-11-20 07:15:49.595618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.417 [2024-11-20 07:15:49.595659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:07.417 [2024-11-20 07:15:49.595674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.417 [2024-11-20 07:15:49.598332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.417 [2024-11-20 07:15:49.598402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.417 BaseBdev1 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:07.417 BaseBdev2_malloc 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.417 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.417 [2024-11-20 07:15:49.657901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:07.417 [2024-11-20 07:15:49.657972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.417 [2024-11-20 07:15:49.657994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:07.417 [2024-11-20 07:15:49.658009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.418 [2024-11-20 07:15:49.660475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.418 [2024-11-20 07:15:49.660533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:07.418 BaseBdev2 00:19:07.418 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.418 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.418 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:07.418 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.418 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 BaseBdev3_malloc 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.679 07:15:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 [2024-11-20 07:15:49.740373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:07.679 [2024-11-20 07:15:49.740442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.679 [2024-11-20 07:15:49.740470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:07.679 [2024-11-20 07:15:49.740483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.679 [2024-11-20 07:15:49.742908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.679 [2024-11-20 07:15:49.742958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:07.679 BaseBdev3 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 spare_malloc 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.679 07:15:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 spare_delay 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 [2024-11-20 07:15:49.814252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:07.679 [2024-11-20 07:15:49.814396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.679 [2024-11-20 07:15:49.814422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:07.679 [2024-11-20 07:15:49.814435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.679 [2024-11-20 07:15:49.816926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.679 [2024-11-20 07:15:49.816978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:07.679 spare 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 [2024-11-20 07:15:49.826334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.679 [2024-11-20 07:15:49.828433] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.679 [2024-11-20 07:15:49.828505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:07.679 [2024-11-20 07:15:49.828710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:07.679 [2024-11-20 07:15:49.828726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:07.679 [2024-11-20 07:15:49.829074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:07.679 [2024-11-20 07:15:49.835716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:07.679 [2024-11-20 07:15:49.835748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:07.679 [2024-11-20 07:15:49.836029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.679 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.679 "name": "raid_bdev1", 00:19:07.679 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:07.679 "strip_size_kb": 64, 00:19:07.679 "state": "online", 00:19:07.679 "raid_level": "raid5f", 00:19:07.679 "superblock": true, 00:19:07.679 "num_base_bdevs": 3, 00:19:07.679 "num_base_bdevs_discovered": 3, 00:19:07.679 "num_base_bdevs_operational": 3, 00:19:07.679 "base_bdevs_list": [ 00:19:07.679 { 00:19:07.679 "name": "BaseBdev1", 00:19:07.679 "uuid": "deb630b3-e566-52ef-b0f4-da7e68d545b7", 00:19:07.679 "is_configured": true, 00:19:07.679 "data_offset": 2048, 00:19:07.679 "data_size": 63488 00:19:07.679 }, 00:19:07.679 { 00:19:07.679 "name": "BaseBdev2", 00:19:07.679 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:07.679 "is_configured": true, 00:19:07.679 "data_offset": 2048, 00:19:07.679 "data_size": 63488 00:19:07.679 }, 00:19:07.679 { 00:19:07.679 "name": "BaseBdev3", 00:19:07.679 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:07.679 "is_configured": true, 00:19:07.679 "data_offset": 2048, 00:19:07.679 "data_size": 63488 00:19:07.679 } 00:19:07.679 ] 00:19:07.679 }' 
00:19:07.680 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.680 07:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.249 [2024-11-20 07:15:50.243142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:08.249 07:15:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:08.249 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:08.510 [2024-11-20 07:15:50.546539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:08.510 /dev/nbd0 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 
-- # grep -q -w nbd0 /proc/partitions 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.510 1+0 records in 00:19:08.510 1+0 records out 00:19:08.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056864 s, 7.2 MB/s 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:08.510 07:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:09.080 496+0 records in 00:19:09.080 496+0 records out 
00:19:09.080 65011712 bytes (65 MB, 62 MiB) copied, 0.483422 s, 134 MB/s 00:19:09.080 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:09.080 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.080 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:09.080 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.080 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:09.080 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.080 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.339 [2024-11-20 07:15:51.358452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:09.339 07:15:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.339 [2024-11-20 07:15:51.371216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.339 07:15:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.339 "name": "raid_bdev1", 00:19:09.339 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:09.339 "strip_size_kb": 64, 00:19:09.339 "state": "online", 00:19:09.339 "raid_level": "raid5f", 00:19:09.339 "superblock": true, 00:19:09.339 "num_base_bdevs": 3, 00:19:09.339 "num_base_bdevs_discovered": 2, 00:19:09.339 "num_base_bdevs_operational": 2, 00:19:09.339 "base_bdevs_list": [ 00:19:09.339 { 00:19:09.339 "name": null, 00:19:09.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.339 "is_configured": false, 00:19:09.339 "data_offset": 0, 00:19:09.339 "data_size": 63488 00:19:09.339 }, 00:19:09.339 { 00:19:09.339 "name": "BaseBdev2", 00:19:09.339 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:09.339 "is_configured": true, 00:19:09.339 "data_offset": 2048, 00:19:09.339 "data_size": 63488 00:19:09.339 }, 00:19:09.339 { 00:19:09.339 "name": "BaseBdev3", 00:19:09.339 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:09.339 "is_configured": true, 00:19:09.339 "data_offset": 2048, 00:19:09.339 "data_size": 63488 00:19:09.339 } 00:19:09.339 ] 00:19:09.339 }' 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.339 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.599 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:09.599 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.599 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.599 [2024-11-20 07:15:51.834486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.599 [2024-11-20 07:15:51.853540] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:09.599 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.599 07:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:09.857 [2024-11-20 07:15:51.862450] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.798 "name": "raid_bdev1", 00:19:10.798 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:10.798 "strip_size_kb": 64, 00:19:10.798 "state": "online", 00:19:10.798 "raid_level": "raid5f", 00:19:10.798 "superblock": true, 00:19:10.798 "num_base_bdevs": 3, 00:19:10.798 "num_base_bdevs_discovered": 3, 00:19:10.798 "num_base_bdevs_operational": 
3, 00:19:10.798 "process": { 00:19:10.798 "type": "rebuild", 00:19:10.798 "target": "spare", 00:19:10.798 "progress": { 00:19:10.798 "blocks": 18432, 00:19:10.798 "percent": 14 00:19:10.798 } 00:19:10.798 }, 00:19:10.798 "base_bdevs_list": [ 00:19:10.798 { 00:19:10.798 "name": "spare", 00:19:10.798 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:10.798 "is_configured": true, 00:19:10.798 "data_offset": 2048, 00:19:10.798 "data_size": 63488 00:19:10.798 }, 00:19:10.798 { 00:19:10.798 "name": "BaseBdev2", 00:19:10.798 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:10.798 "is_configured": true, 00:19:10.798 "data_offset": 2048, 00:19:10.798 "data_size": 63488 00:19:10.798 }, 00:19:10.798 { 00:19:10.798 "name": "BaseBdev3", 00:19:10.798 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:10.798 "is_configured": true, 00:19:10.798 "data_offset": 2048, 00:19:10.798 "data_size": 63488 00:19:10.798 } 00:19:10.798 ] 00:19:10.798 }' 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.798 07:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.798 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.798 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:10.798 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.798 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.798 [2024-11-20 07:15:53.022142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.058 [2024-11-20 07:15:53.074330] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:19:11.058 [2024-11-20 07:15:53.074434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.058 [2024-11-20 07:15:53.074457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.058 [2024-11-20 07:15:53.074467] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.058 07:15:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.058 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.058 "name": "raid_bdev1", 00:19:11.058 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:11.058 "strip_size_kb": 64, 00:19:11.058 "state": "online", 00:19:11.058 "raid_level": "raid5f", 00:19:11.058 "superblock": true, 00:19:11.058 "num_base_bdevs": 3, 00:19:11.058 "num_base_bdevs_discovered": 2, 00:19:11.059 "num_base_bdevs_operational": 2, 00:19:11.059 "base_bdevs_list": [ 00:19:11.059 { 00:19:11.059 "name": null, 00:19:11.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.059 "is_configured": false, 00:19:11.059 "data_offset": 0, 00:19:11.059 "data_size": 63488 00:19:11.059 }, 00:19:11.059 { 00:19:11.059 "name": "BaseBdev2", 00:19:11.059 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:11.059 "is_configured": true, 00:19:11.059 "data_offset": 2048, 00:19:11.059 "data_size": 63488 00:19:11.059 }, 00:19:11.059 { 00:19:11.059 "name": "BaseBdev3", 00:19:11.059 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:11.059 "is_configured": true, 00:19:11.059 "data_offset": 2048, 00:19:11.059 "data_size": 63488 00:19:11.059 } 00:19:11.059 ] 00:19:11.059 }' 00:19:11.059 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.059 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.319 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.578 "name": "raid_bdev1", 00:19:11.578 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:11.578 "strip_size_kb": 64, 00:19:11.578 "state": "online", 00:19:11.578 "raid_level": "raid5f", 00:19:11.578 "superblock": true, 00:19:11.578 "num_base_bdevs": 3, 00:19:11.578 "num_base_bdevs_discovered": 2, 00:19:11.578 "num_base_bdevs_operational": 2, 00:19:11.578 "base_bdevs_list": [ 00:19:11.578 { 00:19:11.578 "name": null, 00:19:11.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.578 "is_configured": false, 00:19:11.578 "data_offset": 0, 00:19:11.578 "data_size": 63488 00:19:11.578 }, 00:19:11.578 { 00:19:11.578 "name": "BaseBdev2", 00:19:11.578 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:11.578 "is_configured": true, 00:19:11.578 "data_offset": 2048, 00:19:11.578 "data_size": 63488 00:19:11.578 }, 00:19:11.578 { 00:19:11.578 "name": "BaseBdev3", 00:19:11.578 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:11.578 "is_configured": true, 00:19:11.578 "data_offset": 2048, 00:19:11.578 "data_size": 63488 00:19:11.578 } 00:19:11.578 ] 00:19:11.578 }' 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.578 [2024-11-20 07:15:53.700943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.578 [2024-11-20 07:15:53.719814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.578 07:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:11.578 [2024-11-20 07:15:53.729078] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.516 07:15:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.516 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.775 "name": "raid_bdev1", 00:19:12.775 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:12.775 "strip_size_kb": 64, 00:19:12.775 "state": "online", 00:19:12.775 "raid_level": "raid5f", 00:19:12.775 "superblock": true, 00:19:12.775 "num_base_bdevs": 3, 00:19:12.775 "num_base_bdevs_discovered": 3, 00:19:12.775 "num_base_bdevs_operational": 3, 00:19:12.775 "process": { 00:19:12.775 "type": "rebuild", 00:19:12.775 "target": "spare", 00:19:12.775 "progress": { 00:19:12.775 "blocks": 18432, 00:19:12.775 "percent": 14 00:19:12.775 } 00:19:12.775 }, 00:19:12.775 "base_bdevs_list": [ 00:19:12.775 { 00:19:12.775 "name": "spare", 00:19:12.775 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:12.775 "is_configured": true, 00:19:12.775 "data_offset": 2048, 00:19:12.775 "data_size": 63488 00:19:12.775 }, 00:19:12.775 { 00:19:12.775 "name": "BaseBdev2", 00:19:12.775 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:12.775 "is_configured": true, 00:19:12.775 "data_offset": 2048, 00:19:12.775 "data_size": 63488 00:19:12.775 }, 00:19:12.775 { 00:19:12.775 "name": "BaseBdev3", 00:19:12.775 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:12.775 "is_configured": true, 00:19:12.775 "data_offset": 2048, 00:19:12.775 "data_size": 63488 00:19:12.775 } 00:19:12.775 ] 00:19:12.775 }' 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.775 07:15:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:12.775 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=590 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.775 "name": "raid_bdev1", 00:19:12.775 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:12.775 "strip_size_kb": 64, 00:19:12.775 "state": "online", 00:19:12.775 "raid_level": "raid5f", 00:19:12.775 "superblock": true, 00:19:12.775 "num_base_bdevs": 3, 00:19:12.775 "num_base_bdevs_discovered": 3, 00:19:12.775 "num_base_bdevs_operational": 3, 00:19:12.775 "process": { 00:19:12.775 "type": "rebuild", 00:19:12.775 "target": "spare", 00:19:12.775 "progress": { 00:19:12.775 "blocks": 22528, 00:19:12.775 "percent": 17 00:19:12.775 } 00:19:12.775 }, 00:19:12.775 "base_bdevs_list": [ 00:19:12.775 { 00:19:12.775 "name": "spare", 00:19:12.775 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:12.775 "is_configured": true, 00:19:12.775 "data_offset": 2048, 00:19:12.775 "data_size": 63488 00:19:12.775 }, 00:19:12.775 { 00:19:12.775 "name": "BaseBdev2", 00:19:12.775 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:12.775 "is_configured": true, 00:19:12.775 "data_offset": 2048, 00:19:12.775 "data_size": 63488 00:19:12.775 }, 00:19:12.775 { 00:19:12.775 "name": "BaseBdev3", 00:19:12.775 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:12.775 "is_configured": true, 00:19:12.775 "data_offset": 2048, 00:19:12.775 "data_size": 63488 00:19:12.775 } 00:19:12.775 ] 00:19:12.775 }' 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.775 07:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.034 
07:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.034 07:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.974 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.974 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.974 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.974 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.974 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.974 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.975 "name": "raid_bdev1", 00:19:13.975 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:13.975 "strip_size_kb": 64, 00:19:13.975 "state": "online", 00:19:13.975 "raid_level": "raid5f", 00:19:13.975 "superblock": true, 00:19:13.975 "num_base_bdevs": 3, 00:19:13.975 "num_base_bdevs_discovered": 3, 00:19:13.975 "num_base_bdevs_operational": 3, 00:19:13.975 "process": { 00:19:13.975 "type": "rebuild", 00:19:13.975 "target": "spare", 00:19:13.975 
"progress": { 00:19:13.975 "blocks": 47104, 00:19:13.975 "percent": 37 00:19:13.975 } 00:19:13.975 }, 00:19:13.975 "base_bdevs_list": [ 00:19:13.975 { 00:19:13.975 "name": "spare", 00:19:13.975 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:13.975 "is_configured": true, 00:19:13.975 "data_offset": 2048, 00:19:13.975 "data_size": 63488 00:19:13.975 }, 00:19:13.975 { 00:19:13.975 "name": "BaseBdev2", 00:19:13.975 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:13.975 "is_configured": true, 00:19:13.975 "data_offset": 2048, 00:19:13.975 "data_size": 63488 00:19:13.975 }, 00:19:13.975 { 00:19:13.975 "name": "BaseBdev3", 00:19:13.975 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:13.975 "is_configured": true, 00:19:13.975 "data_offset": 2048, 00:19:13.975 "data_size": 63488 00:19:13.975 } 00:19:13.975 ] 00:19:13.975 }' 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.975 07:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.367 
07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.367 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.367 "name": "raid_bdev1", 00:19:15.367 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:15.367 "strip_size_kb": 64, 00:19:15.367 "state": "online", 00:19:15.367 "raid_level": "raid5f", 00:19:15.367 "superblock": true, 00:19:15.367 "num_base_bdevs": 3, 00:19:15.367 "num_base_bdevs_discovered": 3, 00:19:15.368 "num_base_bdevs_operational": 3, 00:19:15.368 "process": { 00:19:15.368 "type": "rebuild", 00:19:15.368 "target": "spare", 00:19:15.368 "progress": { 00:19:15.368 "blocks": 69632, 00:19:15.368 "percent": 54 00:19:15.368 } 00:19:15.368 }, 00:19:15.368 "base_bdevs_list": [ 00:19:15.368 { 00:19:15.368 "name": "spare", 00:19:15.368 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:15.368 "is_configured": true, 00:19:15.368 "data_offset": 2048, 00:19:15.368 "data_size": 63488 00:19:15.368 }, 00:19:15.368 { 00:19:15.368 "name": "BaseBdev2", 00:19:15.368 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:15.368 "is_configured": true, 00:19:15.368 "data_offset": 2048, 00:19:15.368 "data_size": 63488 00:19:15.368 }, 00:19:15.368 { 00:19:15.368 "name": "BaseBdev3", 00:19:15.368 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:15.368 "is_configured": true, 00:19:15.368 "data_offset": 2048, 00:19:15.368 
"data_size": 63488 00:19:15.368 } 00:19:15.368 ] 00:19:15.368 }' 00:19:15.368 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.368 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.369 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.369 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.369 07:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.313 "name": 
"raid_bdev1", 00:19:16.313 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:16.313 "strip_size_kb": 64, 00:19:16.313 "state": "online", 00:19:16.313 "raid_level": "raid5f", 00:19:16.313 "superblock": true, 00:19:16.313 "num_base_bdevs": 3, 00:19:16.313 "num_base_bdevs_discovered": 3, 00:19:16.313 "num_base_bdevs_operational": 3, 00:19:16.313 "process": { 00:19:16.313 "type": "rebuild", 00:19:16.313 "target": "spare", 00:19:16.313 "progress": { 00:19:16.313 "blocks": 92160, 00:19:16.313 "percent": 72 00:19:16.313 } 00:19:16.313 }, 00:19:16.313 "base_bdevs_list": [ 00:19:16.313 { 00:19:16.313 "name": "spare", 00:19:16.313 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:16.313 "is_configured": true, 00:19:16.313 "data_offset": 2048, 00:19:16.313 "data_size": 63488 00:19:16.313 }, 00:19:16.313 { 00:19:16.313 "name": "BaseBdev2", 00:19:16.313 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:16.313 "is_configured": true, 00:19:16.313 "data_offset": 2048, 00:19:16.313 "data_size": 63488 00:19:16.313 }, 00:19:16.313 { 00:19:16.313 "name": "BaseBdev3", 00:19:16.313 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:16.313 "is_configured": true, 00:19:16.313 "data_offset": 2048, 00:19:16.313 "data_size": 63488 00:19:16.313 } 00:19:16.313 ] 00:19:16.313 }' 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.313 07:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.270 07:15:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.270 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.529 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.529 "name": "raid_bdev1", 00:19:17.529 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:17.529 "strip_size_kb": 64, 00:19:17.529 "state": "online", 00:19:17.529 "raid_level": "raid5f", 00:19:17.529 "superblock": true, 00:19:17.529 "num_base_bdevs": 3, 00:19:17.529 "num_base_bdevs_discovered": 3, 00:19:17.529 "num_base_bdevs_operational": 3, 00:19:17.529 "process": { 00:19:17.529 "type": "rebuild", 00:19:17.529 "target": "spare", 00:19:17.529 "progress": { 00:19:17.529 "blocks": 116736, 00:19:17.529 "percent": 91 00:19:17.529 } 00:19:17.529 }, 00:19:17.529 "base_bdevs_list": [ 00:19:17.529 { 00:19:17.529 "name": "spare", 00:19:17.529 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:17.529 "is_configured": true, 00:19:17.529 "data_offset": 2048, 00:19:17.529 
"data_size": 63488 00:19:17.529 }, 00:19:17.529 { 00:19:17.529 "name": "BaseBdev2", 00:19:17.529 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:17.529 "is_configured": true, 00:19:17.529 "data_offset": 2048, 00:19:17.529 "data_size": 63488 00:19:17.529 }, 00:19:17.529 { 00:19:17.529 "name": "BaseBdev3", 00:19:17.529 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:17.529 "is_configured": true, 00:19:17.529 "data_offset": 2048, 00:19:17.529 "data_size": 63488 00:19:17.529 } 00:19:17.529 ] 00:19:17.529 }' 00:19:17.529 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.529 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.529 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.529 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.529 07:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.786 [2024-11-20 07:15:59.990199] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.786 [2024-11-20 07:15:59.990303] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.786 [2024-11-20 07:15:59.990520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.724 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.724 "name": "raid_bdev1", 00:19:18.724 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:18.724 "strip_size_kb": 64, 00:19:18.724 "state": "online", 00:19:18.724 "raid_level": "raid5f", 00:19:18.724 "superblock": true, 00:19:18.724 "num_base_bdevs": 3, 00:19:18.724 "num_base_bdevs_discovered": 3, 00:19:18.724 "num_base_bdevs_operational": 3, 00:19:18.724 "base_bdevs_list": [ 00:19:18.724 { 00:19:18.724 "name": "spare", 00:19:18.724 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:18.724 "is_configured": true, 00:19:18.724 "data_offset": 2048, 00:19:18.724 "data_size": 63488 00:19:18.724 }, 00:19:18.724 { 00:19:18.724 "name": "BaseBdev2", 00:19:18.724 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:18.724 "is_configured": true, 00:19:18.724 "data_offset": 2048, 00:19:18.724 "data_size": 63488 00:19:18.724 }, 00:19:18.724 { 00:19:18.724 "name": "BaseBdev3", 00:19:18.724 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:18.724 "is_configured": true, 00:19:18.724 "data_offset": 2048, 00:19:18.725 "data_size": 63488 00:19:18.725 } 00:19:18.725 ] 00:19:18.725 }' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.725 "name": "raid_bdev1", 00:19:18.725 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:18.725 "strip_size_kb": 64, 00:19:18.725 "state": "online", 00:19:18.725 "raid_level": "raid5f", 00:19:18.725 "superblock": true, 00:19:18.725 "num_base_bdevs": 3, 00:19:18.725 
"num_base_bdevs_discovered": 3, 00:19:18.725 "num_base_bdevs_operational": 3, 00:19:18.725 "base_bdevs_list": [ 00:19:18.725 { 00:19:18.725 "name": "spare", 00:19:18.725 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:18.725 "is_configured": true, 00:19:18.725 "data_offset": 2048, 00:19:18.725 "data_size": 63488 00:19:18.725 }, 00:19:18.725 { 00:19:18.725 "name": "BaseBdev2", 00:19:18.725 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:18.725 "is_configured": true, 00:19:18.725 "data_offset": 2048, 00:19:18.725 "data_size": 63488 00:19:18.725 }, 00:19:18.725 { 00:19:18.725 "name": "BaseBdev3", 00:19:18.725 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:18.725 "is_configured": true, 00:19:18.725 "data_offset": 2048, 00:19:18.725 "data_size": 63488 00:19:18.725 } 00:19:18.725 ] 00:19:18.725 }' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.725 07:16:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.725 "name": "raid_bdev1", 00:19:18.725 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:18.725 "strip_size_kb": 64, 00:19:18.725 "state": "online", 00:19:18.725 "raid_level": "raid5f", 00:19:18.725 "superblock": true, 00:19:18.725 "num_base_bdevs": 3, 00:19:18.725 "num_base_bdevs_discovered": 3, 00:19:18.725 "num_base_bdevs_operational": 3, 00:19:18.725 "base_bdevs_list": [ 00:19:18.725 { 00:19:18.725 "name": "spare", 00:19:18.725 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:18.725 "is_configured": true, 00:19:18.725 "data_offset": 2048, 00:19:18.725 "data_size": 63488 00:19:18.725 }, 00:19:18.725 { 00:19:18.725 "name": "BaseBdev2", 00:19:18.725 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:18.725 "is_configured": true, 00:19:18.725 "data_offset": 2048, 00:19:18.725 "data_size": 63488 00:19:18.725 }, 00:19:18.725 { 00:19:18.725 "name": "BaseBdev3", 00:19:18.725 "uuid": 
"5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:18.725 "is_configured": true, 00:19:18.725 "data_offset": 2048, 00:19:18.725 "data_size": 63488 00:19:18.725 } 00:19:18.725 ] 00:19:18.725 }' 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.725 07:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.294 [2024-11-20 07:16:01.377011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.294 [2024-11-20 07:16:01.377122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.294 [2024-11-20 07:16:01.377266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.294 [2024-11-20 07:16:01.377428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.294 [2024-11-20 07:16:01.377508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.294 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:19.553 /dev/nbd0 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.553 1+0 records in 00:19:19.553 1+0 records out 00:19:19.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629368 s, 6.5 MB/s 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.553 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:19.811 /dev/nbd1 00:19:19.811 
07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:19.811 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:19.811 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:19.811 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:19.811 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:19.811 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:19.811 07:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.811 1+0 records in 00:19:19.811 1+0 records out 00:19:19.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475024 s, 8.6 MB/s 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 
00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.811 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.812 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:20.087 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:20.087 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:20.087 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:20.087 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:20.087 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:20.087 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.087 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.379 
07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.379 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.639 [2024-11-20 
07:16:02.766321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.639 [2024-11-20 07:16:02.766469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.639 [2024-11-20 07:16:02.766523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:20.639 [2024-11-20 07:16:02.766568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.639 [2024-11-20 07:16:02.769439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.639 [2024-11-20 07:16:02.769547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.639 [2024-11-20 07:16:02.769704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:20.639 [2024-11-20 07:16:02.769816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.639 [2024-11-20 07:16:02.770047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.639 [2024-11-20 07:16:02.770224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.639 spare 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.639 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.639 [2024-11-20 07:16:02.870218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:20.639 [2024-11-20 07:16:02.870399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:20.639 [2024-11-20 07:16:02.870845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000047700 00:19:20.639 [2024-11-20 07:16:02.878253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:20.640 [2024-11-20 07:16:02.878356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:20.640 [2024-11-20 07:16:02.878742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.640 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.899 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.899 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.899 "name": "raid_bdev1", 00:19:20.899 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:20.899 "strip_size_kb": 64, 00:19:20.899 "state": "online", 00:19:20.899 "raid_level": "raid5f", 00:19:20.899 "superblock": true, 00:19:20.899 "num_base_bdevs": 3, 00:19:20.899 "num_base_bdevs_discovered": 3, 00:19:20.899 "num_base_bdevs_operational": 3, 00:19:20.899 "base_bdevs_list": [ 00:19:20.899 { 00:19:20.899 "name": "spare", 00:19:20.899 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:20.899 "is_configured": true, 00:19:20.899 "data_offset": 2048, 00:19:20.899 "data_size": 63488 00:19:20.899 }, 00:19:20.899 { 00:19:20.899 "name": "BaseBdev2", 00:19:20.899 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:20.899 "is_configured": true, 00:19:20.899 "data_offset": 2048, 00:19:20.899 "data_size": 63488 00:19:20.899 }, 00:19:20.899 { 00:19:20.899 "name": "BaseBdev3", 00:19:20.899 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:20.899 "is_configured": true, 00:19:20.899 "data_offset": 2048, 00:19:20.899 "data_size": 63488 00:19:20.899 } 00:19:20.899 ] 00:19:20.899 }' 00:19:20.899 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.899 07:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.159 "name": "raid_bdev1", 00:19:21.159 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:21.159 "strip_size_kb": 64, 00:19:21.159 "state": "online", 00:19:21.159 "raid_level": "raid5f", 00:19:21.159 "superblock": true, 00:19:21.159 "num_base_bdevs": 3, 00:19:21.159 "num_base_bdevs_discovered": 3, 00:19:21.159 "num_base_bdevs_operational": 3, 00:19:21.159 "base_bdevs_list": [ 00:19:21.159 { 00:19:21.159 "name": "spare", 00:19:21.159 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:21.159 "is_configured": true, 00:19:21.159 "data_offset": 2048, 00:19:21.159 "data_size": 63488 00:19:21.159 }, 00:19:21.159 { 00:19:21.159 "name": "BaseBdev2", 00:19:21.159 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:21.159 "is_configured": true, 00:19:21.159 "data_offset": 2048, 00:19:21.159 "data_size": 63488 00:19:21.159 }, 00:19:21.159 { 00:19:21.159 "name": "BaseBdev3", 00:19:21.159 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:21.159 "is_configured": true, 00:19:21.159 "data_offset": 2048, 00:19:21.159 "data_size": 63488 00:19:21.159 } 00:19:21.159 ] 00:19:21.159 }' 
00:19:21.159 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.417 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.418 [2024-11-20 07:16:03.566176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.418 "name": "raid_bdev1", 00:19:21.418 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:21.418 "strip_size_kb": 64, 00:19:21.418 "state": "online", 00:19:21.418 "raid_level": "raid5f", 00:19:21.418 "superblock": true, 00:19:21.418 "num_base_bdevs": 3, 00:19:21.418 "num_base_bdevs_discovered": 2, 00:19:21.418 "num_base_bdevs_operational": 2, 00:19:21.418 "base_bdevs_list": [ 00:19:21.418 { 00:19:21.418 "name": null, 00:19:21.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.418 "is_configured": false, 00:19:21.418 
"data_offset": 0, 00:19:21.418 "data_size": 63488 00:19:21.418 }, 00:19:21.418 { 00:19:21.418 "name": "BaseBdev2", 00:19:21.418 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:21.418 "is_configured": true, 00:19:21.418 "data_offset": 2048, 00:19:21.418 "data_size": 63488 00:19:21.418 }, 00:19:21.418 { 00:19:21.418 "name": "BaseBdev3", 00:19:21.418 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:21.418 "is_configured": true, 00:19:21.418 "data_offset": 2048, 00:19:21.418 "data_size": 63488 00:19:21.418 } 00:19:21.418 ] 00:19:21.418 }' 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.418 07:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.985 07:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:21.985 07:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.985 07:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.985 [2024-11-20 07:16:04.021508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.985 [2024-11-20 07:16:04.021812] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.985 [2024-11-20 07:16:04.021894] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:21.985 [2024-11-20 07:16:04.021982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.985 [2024-11-20 07:16:04.040988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:21.985 07:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.985 07:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:21.985 [2024-11-20 07:16:04.050357] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.921 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.922 "name": "raid_bdev1", 00:19:22.922 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:22.922 "strip_size_kb": 64, 00:19:22.922 "state": "online", 00:19:22.922 
"raid_level": "raid5f", 00:19:22.922 "superblock": true, 00:19:22.922 "num_base_bdevs": 3, 00:19:22.922 "num_base_bdevs_discovered": 3, 00:19:22.922 "num_base_bdevs_operational": 3, 00:19:22.922 "process": { 00:19:22.922 "type": "rebuild", 00:19:22.922 "target": "spare", 00:19:22.922 "progress": { 00:19:22.922 "blocks": 18432, 00:19:22.922 "percent": 14 00:19:22.922 } 00:19:22.922 }, 00:19:22.922 "base_bdevs_list": [ 00:19:22.922 { 00:19:22.922 "name": "spare", 00:19:22.922 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:22.922 "is_configured": true, 00:19:22.922 "data_offset": 2048, 00:19:22.922 "data_size": 63488 00:19:22.922 }, 00:19:22.922 { 00:19:22.922 "name": "BaseBdev2", 00:19:22.922 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:22.922 "is_configured": true, 00:19:22.922 "data_offset": 2048, 00:19:22.922 "data_size": 63488 00:19:22.922 }, 00:19:22.922 { 00:19:22.922 "name": "BaseBdev3", 00:19:22.922 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:22.922 "is_configured": true, 00:19:22.922 "data_offset": 2048, 00:19:22.922 "data_size": 63488 00:19:22.922 } 00:19:22.922 ] 00:19:22.922 }' 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.922 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.922 [2024-11-20 07:16:05.162952] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:23.180 [2024-11-20 07:16:05.262907] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:23.180 [2024-11-20 07:16:05.263019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.180 [2024-11-20 07:16:05.263041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:23.180 [2024-11-20 07:16:05.263053] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.180 "name": "raid_bdev1", 00:19:23.180 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:23.180 "strip_size_kb": 64, 00:19:23.180 "state": "online", 00:19:23.180 "raid_level": "raid5f", 00:19:23.180 "superblock": true, 00:19:23.180 "num_base_bdevs": 3, 00:19:23.180 "num_base_bdevs_discovered": 2, 00:19:23.180 "num_base_bdevs_operational": 2, 00:19:23.180 "base_bdevs_list": [ 00:19:23.180 { 00:19:23.180 "name": null, 00:19:23.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.180 "is_configured": false, 00:19:23.180 "data_offset": 0, 00:19:23.180 "data_size": 63488 00:19:23.180 }, 00:19:23.180 { 00:19:23.180 "name": "BaseBdev2", 00:19:23.180 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:23.180 "is_configured": true, 00:19:23.180 "data_offset": 2048, 00:19:23.180 "data_size": 63488 00:19:23.180 }, 00:19:23.180 { 00:19:23.180 "name": "BaseBdev3", 00:19:23.180 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:23.180 "is_configured": true, 00:19:23.180 "data_offset": 2048, 00:19:23.180 "data_size": 63488 00:19:23.180 } 00:19:23.180 ] 00:19:23.180 }' 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.180 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.745 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.745 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.745 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.745 [2024-11-20 07:16:05.797493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.745 [2024-11-20 07:16:05.797577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.745 [2024-11-20 07:16:05.797610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:23.745 [2024-11-20 07:16:05.797632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.745 [2024-11-20 07:16:05.798244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.745 [2024-11-20 07:16:05.798286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.745 [2024-11-20 07:16:05.798430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:23.745 [2024-11-20 07:16:05.798459] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.745 [2024-11-20 07:16:05.798471] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:23.745 [2024-11-20 07:16:05.798507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.745 [2024-11-20 07:16:05.818770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:19:23.746 spare 00:19:23.746 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.746 07:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:23.746 [2024-11-20 07:16:05.828399] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.680 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.680 "name": "raid_bdev1", 00:19:24.680 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:24.680 "strip_size_kb": 64, 00:19:24.680 "state": 
"online", 00:19:24.680 "raid_level": "raid5f", 00:19:24.680 "superblock": true, 00:19:24.680 "num_base_bdevs": 3, 00:19:24.680 "num_base_bdevs_discovered": 3, 00:19:24.680 "num_base_bdevs_operational": 3, 00:19:24.680 "process": { 00:19:24.680 "type": "rebuild", 00:19:24.680 "target": "spare", 00:19:24.680 "progress": { 00:19:24.680 "blocks": 18432, 00:19:24.680 "percent": 14 00:19:24.680 } 00:19:24.680 }, 00:19:24.680 "base_bdevs_list": [ 00:19:24.680 { 00:19:24.680 "name": "spare", 00:19:24.680 "uuid": "535226d3-c1bd-5510-93ec-44c47c5efc08", 00:19:24.680 "is_configured": true, 00:19:24.680 "data_offset": 2048, 00:19:24.681 "data_size": 63488 00:19:24.681 }, 00:19:24.681 { 00:19:24.681 "name": "BaseBdev2", 00:19:24.681 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:24.681 "is_configured": true, 00:19:24.681 "data_offset": 2048, 00:19:24.681 "data_size": 63488 00:19:24.681 }, 00:19:24.681 { 00:19:24.681 "name": "BaseBdev3", 00:19:24.681 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:24.681 "is_configured": true, 00:19:24.681 "data_offset": 2048, 00:19:24.681 "data_size": 63488 00:19:24.681 } 00:19:24.681 ] 00:19:24.681 }' 00:19:24.681 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.681 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.681 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.939 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.939 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:24.939 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.939 07:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.939 [2024-11-20 07:16:06.968861] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.939 [2024-11-20 07:16:07.040989] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:24.939 [2024-11-20 07:16:07.041085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.939 [2024-11-20 07:16:07.041110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.939 [2024-11-20 07:16:07.041120] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.939 "name": "raid_bdev1", 00:19:24.939 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:24.939 "strip_size_kb": 64, 00:19:24.939 "state": "online", 00:19:24.939 "raid_level": "raid5f", 00:19:24.939 "superblock": true, 00:19:24.939 "num_base_bdevs": 3, 00:19:24.939 "num_base_bdevs_discovered": 2, 00:19:24.939 "num_base_bdevs_operational": 2, 00:19:24.939 "base_bdevs_list": [ 00:19:24.939 { 00:19:24.939 "name": null, 00:19:24.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.939 "is_configured": false, 00:19:24.939 "data_offset": 0, 00:19:24.939 "data_size": 63488 00:19:24.939 }, 00:19:24.939 { 00:19:24.939 "name": "BaseBdev2", 00:19:24.939 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:24.939 "is_configured": true, 00:19:24.939 "data_offset": 2048, 00:19:24.939 "data_size": 63488 00:19:24.939 }, 00:19:24.939 { 00:19:24.939 "name": "BaseBdev3", 00:19:24.939 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:24.939 "is_configured": true, 00:19:24.939 "data_offset": 2048, 00:19:24.939 "data_size": 63488 00:19:24.939 } 00:19:24.939 ] 00:19:24.939 }' 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.939 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.508 "name": "raid_bdev1", 00:19:25.508 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:25.508 "strip_size_kb": 64, 00:19:25.508 "state": "online", 00:19:25.508 "raid_level": "raid5f", 00:19:25.508 "superblock": true, 00:19:25.508 "num_base_bdevs": 3, 00:19:25.508 "num_base_bdevs_discovered": 2, 00:19:25.508 "num_base_bdevs_operational": 2, 00:19:25.508 "base_bdevs_list": [ 00:19:25.508 { 00:19:25.508 "name": null, 00:19:25.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.508 "is_configured": false, 00:19:25.508 "data_offset": 0, 00:19:25.508 "data_size": 63488 00:19:25.508 }, 00:19:25.508 { 00:19:25.508 "name": "BaseBdev2", 00:19:25.508 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:25.508 "is_configured": true, 00:19:25.508 "data_offset": 2048, 00:19:25.508 "data_size": 63488 00:19:25.508 }, 00:19:25.508 { 00:19:25.508 "name": "BaseBdev3", 00:19:25.508 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:25.508 "is_configured": true, 
00:19:25.508 "data_offset": 2048, 00:19:25.508 "data_size": 63488 00:19:25.508 } 00:19:25.508 ] 00:19:25.508 }' 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.508 [2024-11-20 07:16:07.714686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:25.508 [2024-11-20 07:16:07.714763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.508 [2024-11-20 07:16:07.714799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:25.508 [2024-11-20 07:16:07.714815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.508 [2024-11-20 07:16:07.715414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.508 [2024-11-20 
07:16:07.715445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:25.508 [2024-11-20 07:16:07.715557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:25.508 [2024-11-20 07:16:07.715594] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:25.508 [2024-11-20 07:16:07.715619] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:25.508 [2024-11-20 07:16:07.715633] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:25.508 BaseBdev1 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.508 07:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.882 07:16:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.882 "name": "raid_bdev1", 00:19:26.882 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:26.882 "strip_size_kb": 64, 00:19:26.882 "state": "online", 00:19:26.882 "raid_level": "raid5f", 00:19:26.882 "superblock": true, 00:19:26.882 "num_base_bdevs": 3, 00:19:26.882 "num_base_bdevs_discovered": 2, 00:19:26.882 "num_base_bdevs_operational": 2, 00:19:26.882 "base_bdevs_list": [ 00:19:26.882 { 00:19:26.882 "name": null, 00:19:26.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.882 "is_configured": false, 00:19:26.882 "data_offset": 0, 00:19:26.882 "data_size": 63488 00:19:26.882 }, 00:19:26.882 { 00:19:26.882 "name": "BaseBdev2", 00:19:26.882 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:26.882 "is_configured": true, 00:19:26.882 "data_offset": 2048, 00:19:26.882 "data_size": 63488 00:19:26.882 }, 00:19:26.882 { 00:19:26.882 "name": "BaseBdev3", 00:19:26.882 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:26.882 "is_configured": true, 00:19:26.882 "data_offset": 2048, 00:19:26.882 "data_size": 63488 00:19:26.882 } 00:19:26.882 ] 00:19:26.882 }' 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.882 07:16:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.167 "name": "raid_bdev1", 00:19:27.167 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:27.167 "strip_size_kb": 64, 00:19:27.167 "state": "online", 00:19:27.167 "raid_level": "raid5f", 00:19:27.167 "superblock": true, 00:19:27.167 "num_base_bdevs": 3, 00:19:27.167 "num_base_bdevs_discovered": 2, 00:19:27.167 "num_base_bdevs_operational": 2, 00:19:27.167 "base_bdevs_list": [ 00:19:27.167 { 00:19:27.167 "name": null, 00:19:27.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.167 "is_configured": false, 00:19:27.167 "data_offset": 0, 00:19:27.167 "data_size": 63488 00:19:27.167 }, 00:19:27.167 { 00:19:27.167 "name": "BaseBdev2", 00:19:27.167 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 
00:19:27.167 "is_configured": true, 00:19:27.167 "data_offset": 2048, 00:19:27.167 "data_size": 63488 00:19:27.167 }, 00:19:27.167 { 00:19:27.167 "name": "BaseBdev3", 00:19:27.167 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:27.167 "is_configured": true, 00:19:27.167 "data_offset": 2048, 00:19:27.167 "data_size": 63488 00:19:27.167 } 00:19:27.167 ] 00:19:27.167 }' 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.167 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.168 07:16:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.168 [2024-11-20 07:16:09.344551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.168 [2024-11-20 07:16:09.344772] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:27.168 [2024-11-20 07:16:09.344820] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:27.168 request: 00:19:27.168 { 00:19:27.168 "base_bdev": "BaseBdev1", 00:19:27.168 "raid_bdev": "raid_bdev1", 00:19:27.168 "method": "bdev_raid_add_base_bdev", 00:19:27.168 "req_id": 1 00:19:27.168 } 00:19:27.168 Got JSON-RPC error response 00:19:27.168 response: 00:19:27.168 { 00:19:27.168 "code": -22, 00:19:27.168 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:27.168 } 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.168 07:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.140 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.398 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.398 "name": "raid_bdev1", 00:19:28.398 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:28.398 "strip_size_kb": 64, 00:19:28.398 "state": "online", 00:19:28.398 "raid_level": "raid5f", 00:19:28.398 "superblock": true, 00:19:28.398 "num_base_bdevs": 3, 00:19:28.398 "num_base_bdevs_discovered": 2, 00:19:28.398 "num_base_bdevs_operational": 2, 00:19:28.398 "base_bdevs_list": [ 00:19:28.398 { 00:19:28.398 "name": null, 00:19:28.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.398 "is_configured": false, 00:19:28.398 "data_offset": 0, 00:19:28.398 "data_size": 63488 00:19:28.398 }, 00:19:28.398 { 00:19:28.398 
"name": "BaseBdev2", 00:19:28.398 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:28.398 "is_configured": true, 00:19:28.398 "data_offset": 2048, 00:19:28.398 "data_size": 63488 00:19:28.398 }, 00:19:28.398 { 00:19:28.398 "name": "BaseBdev3", 00:19:28.398 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:28.398 "is_configured": true, 00:19:28.398 "data_offset": 2048, 00:19:28.398 "data_size": 63488 00:19:28.398 } 00:19:28.398 ] 00:19:28.398 }' 00:19:28.398 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.398 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.655 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.656 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.656 "name": "raid_bdev1", 00:19:28.656 "uuid": "0687c162-d309-4e88-84ea-e608fea211ca", 00:19:28.656 
"strip_size_kb": 64, 00:19:28.656 "state": "online", 00:19:28.656 "raid_level": "raid5f", 00:19:28.656 "superblock": true, 00:19:28.656 "num_base_bdevs": 3, 00:19:28.656 "num_base_bdevs_discovered": 2, 00:19:28.656 "num_base_bdevs_operational": 2, 00:19:28.656 "base_bdevs_list": [ 00:19:28.656 { 00:19:28.656 "name": null, 00:19:28.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.656 "is_configured": false, 00:19:28.656 "data_offset": 0, 00:19:28.656 "data_size": 63488 00:19:28.656 }, 00:19:28.656 { 00:19:28.656 "name": "BaseBdev2", 00:19:28.656 "uuid": "8db84b9f-0eec-5715-9455-186bf65ede16", 00:19:28.656 "is_configured": true, 00:19:28.656 "data_offset": 2048, 00:19:28.656 "data_size": 63488 00:19:28.656 }, 00:19:28.656 { 00:19:28.656 "name": "BaseBdev3", 00:19:28.656 "uuid": "5427ff7d-d30a-5885-b92f-5dabc120af1b", 00:19:28.656 "is_configured": true, 00:19:28.656 "data_offset": 2048, 00:19:28.656 "data_size": 63488 00:19:28.656 } 00:19:28.656 ] 00:19:28.656 }' 00:19:28.656 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.913 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.913 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.913 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.913 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82509 00:19:28.913 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82509 ']' 00:19:28.913 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82509 00:19:28.913 07:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:28.913 07:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.913 07:16:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82509 00:19:28.913 killing process with pid 82509 00:19:28.913 Received shutdown signal, test time was about 60.000000 seconds 00:19:28.913 00:19:28.913 Latency(us) 00:19:28.913 [2024-11-20T07:16:11.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.913 [2024-11-20T07:16:11.178Z] =================================================================================================================== 00:19:28.913 [2024-11-20T07:16:11.178Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.913 07:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.913 07:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.913 07:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82509' 00:19:28.913 07:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82509 00:19:28.913 07:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82509 00:19:28.913 [2024-11-20 07:16:11.038990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.913 [2024-11-20 07:16:11.039142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.913 [2024-11-20 07:16:11.039235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.913 [2024-11-20 07:16:11.039258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:29.478 [2024-11-20 07:16:11.503912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.909 07:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:30.909 00:19:30.909 real 0m24.292s 00:19:30.909 user 0m31.098s 
00:19:30.909 sys 0m3.022s 00:19:30.909 07:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.909 07:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.909 ************************************ 00:19:30.909 END TEST raid5f_rebuild_test_sb 00:19:30.909 ************************************ 00:19:30.909 07:16:12 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:30.909 07:16:12 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:19:30.909 07:16:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:30.909 07:16:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.909 07:16:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.909 ************************************ 00:19:30.909 START TEST raid5f_state_function_test 00:19:30.909 ************************************ 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83269 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83269' 00:19:30.909 Process raid pid: 83269 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83269 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83269 ']' 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.909 07:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.909 [2024-11-20 07:16:12.996064] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:19:30.909 [2024-11-20 07:16:12.996198] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.909 [2024-11-20 07:16:13.160454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.167 [2024-11-20 07:16:13.296185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.426 [2024-11-20 07:16:13.537674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.426 [2024-11-20 07:16:13.537748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.685 [2024-11-20 07:16:13.919847] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:31.685 [2024-11-20 07:16:13.919908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:31.685 [2024-11-20 07:16:13.919920] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.685 [2024-11-20 07:16:13.919932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.685 [2024-11-20 07:16:13.919940] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:19:31.685 [2024-11-20 07:16:13.919949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:31.685 [2024-11-20 07:16:13.919956] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:31.685 [2024-11-20 07:16:13.919966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.685 07:16:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.685 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.943 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.943 "name": "Existed_Raid", 00:19:31.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.943 "strip_size_kb": 64, 00:19:31.943 "state": "configuring", 00:19:31.943 "raid_level": "raid5f", 00:19:31.943 "superblock": false, 00:19:31.943 "num_base_bdevs": 4, 00:19:31.943 "num_base_bdevs_discovered": 0, 00:19:31.943 "num_base_bdevs_operational": 4, 00:19:31.943 "base_bdevs_list": [ 00:19:31.943 { 00:19:31.943 "name": "BaseBdev1", 00:19:31.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.943 "is_configured": false, 00:19:31.943 "data_offset": 0, 00:19:31.943 "data_size": 0 00:19:31.943 }, 00:19:31.943 { 00:19:31.943 "name": "BaseBdev2", 00:19:31.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.943 "is_configured": false, 00:19:31.943 "data_offset": 0, 00:19:31.943 "data_size": 0 00:19:31.943 }, 00:19:31.943 { 00:19:31.943 "name": "BaseBdev3", 00:19:31.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.943 "is_configured": false, 00:19:31.943 "data_offset": 0, 00:19:31.943 "data_size": 0 00:19:31.943 }, 00:19:31.943 { 00:19:31.943 "name": "BaseBdev4", 00:19:31.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.943 "is_configured": false, 00:19:31.943 "data_offset": 0, 00:19:31.943 "data_size": 0 00:19:31.943 } 00:19:31.943 ] 00:19:31.943 }' 00:19:31.943 07:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.943 07:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.229 [2024-11-20 07:16:14.347125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.229 [2024-11-20 07:16:14.347177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.229 [2024-11-20 07:16:14.355121] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:32.229 [2024-11-20 07:16:14.355177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:32.229 [2024-11-20 07:16:14.355188] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.229 [2024-11-20 07:16:14.355199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.229 [2024-11-20 07:16:14.355207] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:32.229 [2024-11-20 07:16:14.355217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:32.229 [2024-11-20 07:16:14.355224] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:19:32.229 [2024-11-20 07:16:14.355234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.229 [2024-11-20 07:16:14.405586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.229 BaseBdev1 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.229 
07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.229 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.229 [ 00:19:32.229 { 00:19:32.229 "name": "BaseBdev1", 00:19:32.229 "aliases": [ 00:19:32.229 "946b5ada-419c-46d3-8ecb-8594fd1a4e50" 00:19:32.229 ], 00:19:32.229 "product_name": "Malloc disk", 00:19:32.229 "block_size": 512, 00:19:32.229 "num_blocks": 65536, 00:19:32.229 "uuid": "946b5ada-419c-46d3-8ecb-8594fd1a4e50", 00:19:32.229 "assigned_rate_limits": { 00:19:32.229 "rw_ios_per_sec": 0, 00:19:32.229 "rw_mbytes_per_sec": 0, 00:19:32.229 "r_mbytes_per_sec": 0, 00:19:32.229 "w_mbytes_per_sec": 0 00:19:32.229 }, 00:19:32.229 "claimed": true, 00:19:32.229 "claim_type": "exclusive_write", 00:19:32.229 "zoned": false, 00:19:32.229 "supported_io_types": { 00:19:32.229 "read": true, 00:19:32.229 "write": true, 00:19:32.229 "unmap": true, 00:19:32.229 "flush": true, 00:19:32.229 "reset": true, 00:19:32.229 "nvme_admin": false, 00:19:32.229 "nvme_io": false, 00:19:32.229 "nvme_io_md": false, 00:19:32.229 "write_zeroes": true, 00:19:32.229 "zcopy": true, 00:19:32.229 "get_zone_info": false, 00:19:32.229 "zone_management": false, 00:19:32.229 "zone_append": false, 00:19:32.229 "compare": false, 00:19:32.229 "compare_and_write": false, 00:19:32.229 "abort": true, 00:19:32.229 "seek_hole": false, 00:19:32.229 "seek_data": false, 00:19:32.229 "copy": true, 00:19:32.229 "nvme_iov_md": false 00:19:32.229 }, 00:19:32.229 "memory_domains": [ 00:19:32.229 { 00:19:32.229 "dma_device_id": "system", 00:19:32.230 "dma_device_type": 1 00:19:32.230 }, 00:19:32.230 { 00:19:32.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.230 "dma_device_type": 2 00:19:32.230 } 00:19:32.230 ], 00:19:32.230 "driver_specific": {} 00:19:32.230 } 
00:19:32.230 ] 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.230 "name": "Existed_Raid", 00:19:32.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.230 "strip_size_kb": 64, 00:19:32.230 "state": "configuring", 00:19:32.230 "raid_level": "raid5f", 00:19:32.230 "superblock": false, 00:19:32.230 "num_base_bdevs": 4, 00:19:32.230 "num_base_bdevs_discovered": 1, 00:19:32.230 "num_base_bdevs_operational": 4, 00:19:32.230 "base_bdevs_list": [ 00:19:32.230 { 00:19:32.230 "name": "BaseBdev1", 00:19:32.230 "uuid": "946b5ada-419c-46d3-8ecb-8594fd1a4e50", 00:19:32.230 "is_configured": true, 00:19:32.230 "data_offset": 0, 00:19:32.230 "data_size": 65536 00:19:32.230 }, 00:19:32.230 { 00:19:32.230 "name": "BaseBdev2", 00:19:32.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.230 "is_configured": false, 00:19:32.230 "data_offset": 0, 00:19:32.230 "data_size": 0 00:19:32.230 }, 00:19:32.230 { 00:19:32.230 "name": "BaseBdev3", 00:19:32.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.230 "is_configured": false, 00:19:32.230 "data_offset": 0, 00:19:32.230 "data_size": 0 00:19:32.230 }, 00:19:32.230 { 00:19:32.230 "name": "BaseBdev4", 00:19:32.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.230 "is_configured": false, 00:19:32.230 "data_offset": 0, 00:19:32.230 "data_size": 0 00:19:32.230 } 00:19:32.230 ] 00:19:32.230 }' 00:19:32.230 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.489 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.748 
[2024-11-20 07:16:14.912942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.748 [2024-11-20 07:16:14.913009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.748 [2024-11-20 07:16:14.921006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.748 [2024-11-20 07:16:14.923109] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.748 [2024-11-20 07:16:14.923162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.748 [2024-11-20 07:16:14.923174] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:32.748 [2024-11-20 07:16:14.923187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:32.748 [2024-11-20 07:16:14.923196] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:32.748 [2024-11-20 07:16:14.923206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.748 "name": "Existed_Raid", 00:19:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:32.748 "strip_size_kb": 64, 00:19:32.748 "state": "configuring", 00:19:32.748 "raid_level": "raid5f", 00:19:32.748 "superblock": false, 00:19:32.748 "num_base_bdevs": 4, 00:19:32.748 "num_base_bdevs_discovered": 1, 00:19:32.748 "num_base_bdevs_operational": 4, 00:19:32.748 "base_bdevs_list": [ 00:19:32.748 { 00:19:32.748 "name": "BaseBdev1", 00:19:32.748 "uuid": "946b5ada-419c-46d3-8ecb-8594fd1a4e50", 00:19:32.748 "is_configured": true, 00:19:32.748 "data_offset": 0, 00:19:32.748 "data_size": 65536 00:19:32.748 }, 00:19:32.748 { 00:19:32.748 "name": "BaseBdev2", 00:19:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.748 "is_configured": false, 00:19:32.748 "data_offset": 0, 00:19:32.748 "data_size": 0 00:19:32.748 }, 00:19:32.748 { 00:19:32.748 "name": "BaseBdev3", 00:19:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.748 "is_configured": false, 00:19:32.748 "data_offset": 0, 00:19:32.748 "data_size": 0 00:19:32.748 }, 00:19:32.748 { 00:19:32.748 "name": "BaseBdev4", 00:19:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.748 "is_configured": false, 00:19:32.748 "data_offset": 0, 00:19:32.748 "data_size": 0 00:19:32.748 } 00:19:32.748 ] 00:19:32.748 }' 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.748 07:16:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.317 [2024-11-20 07:16:15.386264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.317 BaseBdev2 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.317 [ 00:19:33.317 { 00:19:33.317 "name": "BaseBdev2", 00:19:33.317 "aliases": [ 00:19:33.317 "b96737a5-cb8c-4ed9-97bb-0eec7847612c" 00:19:33.317 ], 00:19:33.317 "product_name": "Malloc disk", 00:19:33.317 "block_size": 512, 00:19:33.317 "num_blocks": 65536, 00:19:33.317 "uuid": "b96737a5-cb8c-4ed9-97bb-0eec7847612c", 00:19:33.317 "assigned_rate_limits": { 00:19:33.317 "rw_ios_per_sec": 0, 00:19:33.317 "rw_mbytes_per_sec": 0, 00:19:33.317 
"r_mbytes_per_sec": 0, 00:19:33.317 "w_mbytes_per_sec": 0 00:19:33.317 }, 00:19:33.317 "claimed": true, 00:19:33.317 "claim_type": "exclusive_write", 00:19:33.317 "zoned": false, 00:19:33.317 "supported_io_types": { 00:19:33.317 "read": true, 00:19:33.317 "write": true, 00:19:33.317 "unmap": true, 00:19:33.317 "flush": true, 00:19:33.317 "reset": true, 00:19:33.317 "nvme_admin": false, 00:19:33.317 "nvme_io": false, 00:19:33.317 "nvme_io_md": false, 00:19:33.317 "write_zeroes": true, 00:19:33.317 "zcopy": true, 00:19:33.317 "get_zone_info": false, 00:19:33.317 "zone_management": false, 00:19:33.317 "zone_append": false, 00:19:33.317 "compare": false, 00:19:33.317 "compare_and_write": false, 00:19:33.317 "abort": true, 00:19:33.317 "seek_hole": false, 00:19:33.317 "seek_data": false, 00:19:33.317 "copy": true, 00:19:33.317 "nvme_iov_md": false 00:19:33.317 }, 00:19:33.317 "memory_domains": [ 00:19:33.317 { 00:19:33.317 "dma_device_id": "system", 00:19:33.317 "dma_device_type": 1 00:19:33.317 }, 00:19:33.317 { 00:19:33.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.317 "dma_device_type": 2 00:19:33.317 } 00:19:33.317 ], 00:19:33.317 "driver_specific": {} 00:19:33.317 } 00:19:33.317 ] 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.317 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.318 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.318 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.318 "name": "Existed_Raid", 00:19:33.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.318 "strip_size_kb": 64, 00:19:33.318 "state": "configuring", 00:19:33.318 "raid_level": "raid5f", 00:19:33.318 "superblock": false, 00:19:33.318 "num_base_bdevs": 4, 00:19:33.318 "num_base_bdevs_discovered": 2, 00:19:33.318 "num_base_bdevs_operational": 4, 00:19:33.318 "base_bdevs_list": [ 00:19:33.318 { 00:19:33.318 "name": "BaseBdev1", 00:19:33.318 "uuid": 
"946b5ada-419c-46d3-8ecb-8594fd1a4e50", 00:19:33.318 "is_configured": true, 00:19:33.318 "data_offset": 0, 00:19:33.318 "data_size": 65536 00:19:33.318 }, 00:19:33.318 { 00:19:33.318 "name": "BaseBdev2", 00:19:33.318 "uuid": "b96737a5-cb8c-4ed9-97bb-0eec7847612c", 00:19:33.318 "is_configured": true, 00:19:33.318 "data_offset": 0, 00:19:33.318 "data_size": 65536 00:19:33.318 }, 00:19:33.318 { 00:19:33.318 "name": "BaseBdev3", 00:19:33.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.318 "is_configured": false, 00:19:33.318 "data_offset": 0, 00:19:33.318 "data_size": 0 00:19:33.318 }, 00:19:33.318 { 00:19:33.318 "name": "BaseBdev4", 00:19:33.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.318 "is_configured": false, 00:19:33.318 "data_offset": 0, 00:19:33.318 "data_size": 0 00:19:33.318 } 00:19:33.318 ] 00:19:33.318 }' 00:19:33.318 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.318 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.885 [2024-11-20 07:16:15.967691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.885 BaseBdev3 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.885 [ 00:19:33.885 { 00:19:33.885 "name": "BaseBdev3", 00:19:33.885 "aliases": [ 00:19:33.885 "f0d4e8fd-dbb1-4c27-83a1-cf857cf30f9f" 00:19:33.885 ], 00:19:33.885 "product_name": "Malloc disk", 00:19:33.885 "block_size": 512, 00:19:33.885 "num_blocks": 65536, 00:19:33.885 "uuid": "f0d4e8fd-dbb1-4c27-83a1-cf857cf30f9f", 00:19:33.885 "assigned_rate_limits": { 00:19:33.885 "rw_ios_per_sec": 0, 00:19:33.885 "rw_mbytes_per_sec": 0, 00:19:33.885 "r_mbytes_per_sec": 0, 00:19:33.885 "w_mbytes_per_sec": 0 00:19:33.885 }, 00:19:33.885 "claimed": true, 00:19:33.885 "claim_type": "exclusive_write", 00:19:33.885 "zoned": false, 00:19:33.885 "supported_io_types": { 00:19:33.885 "read": true, 00:19:33.885 "write": true, 00:19:33.885 "unmap": true, 00:19:33.885 "flush": true, 00:19:33.885 "reset": true, 00:19:33.885 "nvme_admin": false, 
00:19:33.885 "nvme_io": false, 00:19:33.885 "nvme_io_md": false, 00:19:33.885 "write_zeroes": true, 00:19:33.885 "zcopy": true, 00:19:33.885 "get_zone_info": false, 00:19:33.885 "zone_management": false, 00:19:33.885 "zone_append": false, 00:19:33.885 "compare": false, 00:19:33.885 "compare_and_write": false, 00:19:33.885 "abort": true, 00:19:33.885 "seek_hole": false, 00:19:33.885 "seek_data": false, 00:19:33.885 "copy": true, 00:19:33.885 "nvme_iov_md": false 00:19:33.885 }, 00:19:33.885 "memory_domains": [ 00:19:33.885 { 00:19:33.885 "dma_device_id": "system", 00:19:33.885 "dma_device_type": 1 00:19:33.885 }, 00:19:33.885 { 00:19:33.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.885 "dma_device_type": 2 00:19:33.885 } 00:19:33.885 ], 00:19:33.885 "driver_specific": {} 00:19:33.885 } 00:19:33.885 ] 00:19:33.885 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.886 07:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.886 "name": "Existed_Raid", 00:19:33.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.886 "strip_size_kb": 64, 00:19:33.886 "state": "configuring", 00:19:33.886 "raid_level": "raid5f", 00:19:33.886 "superblock": false, 00:19:33.886 "num_base_bdevs": 4, 00:19:33.886 "num_base_bdevs_discovered": 3, 00:19:33.886 "num_base_bdevs_operational": 4, 00:19:33.886 "base_bdevs_list": [ 00:19:33.886 { 00:19:33.886 "name": "BaseBdev1", 00:19:33.886 "uuid": "946b5ada-419c-46d3-8ecb-8594fd1a4e50", 00:19:33.886 "is_configured": true, 00:19:33.886 "data_offset": 0, 00:19:33.886 "data_size": 65536 00:19:33.886 }, 00:19:33.886 { 00:19:33.886 "name": "BaseBdev2", 00:19:33.886 "uuid": "b96737a5-cb8c-4ed9-97bb-0eec7847612c", 00:19:33.886 "is_configured": true, 00:19:33.886 "data_offset": 0, 00:19:33.886 "data_size": 65536 00:19:33.886 }, 00:19:33.886 { 
00:19:33.886 "name": "BaseBdev3", 00:19:33.886 "uuid": "f0d4e8fd-dbb1-4c27-83a1-cf857cf30f9f", 00:19:33.886 "is_configured": true, 00:19:33.886 "data_offset": 0, 00:19:33.886 "data_size": 65536 00:19:33.886 }, 00:19:33.886 { 00:19:33.886 "name": "BaseBdev4", 00:19:33.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.886 "is_configured": false, 00:19:33.886 "data_offset": 0, 00:19:33.886 "data_size": 0 00:19:33.886 } 00:19:33.886 ] 00:19:33.886 }' 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.886 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.145 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:34.145 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.145 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.406 [2024-11-20 07:16:16.438124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:34.406 [2024-11-20 07:16:16.438211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:34.406 [2024-11-20 07:16:16.438222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:34.406 [2024-11-20 07:16:16.438520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:34.406 [2024-11-20 07:16:16.446679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:34.406 [2024-11-20 07:16:16.446731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:34.406 [2024-11-20 07:16:16.447080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.406 BaseBdev4 00:19:34.406 07:16:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.406 [ 00:19:34.406 { 00:19:34.406 "name": "BaseBdev4", 00:19:34.406 "aliases": [ 00:19:34.406 "cb95e538-c616-469d-82e7-da643ae8978a" 00:19:34.406 ], 00:19:34.406 "product_name": "Malloc disk", 00:19:34.406 "block_size": 512, 00:19:34.406 "num_blocks": 65536, 00:19:34.406 "uuid": "cb95e538-c616-469d-82e7-da643ae8978a", 00:19:34.406 "assigned_rate_limits": { 00:19:34.406 "rw_ios_per_sec": 0, 00:19:34.406 
"rw_mbytes_per_sec": 0, 00:19:34.406 "r_mbytes_per_sec": 0, 00:19:34.406 "w_mbytes_per_sec": 0 00:19:34.406 }, 00:19:34.406 "claimed": true, 00:19:34.406 "claim_type": "exclusive_write", 00:19:34.406 "zoned": false, 00:19:34.406 "supported_io_types": { 00:19:34.406 "read": true, 00:19:34.406 "write": true, 00:19:34.406 "unmap": true, 00:19:34.406 "flush": true, 00:19:34.406 "reset": true, 00:19:34.406 "nvme_admin": false, 00:19:34.406 "nvme_io": false, 00:19:34.406 "nvme_io_md": false, 00:19:34.406 "write_zeroes": true, 00:19:34.406 "zcopy": true, 00:19:34.406 "get_zone_info": false, 00:19:34.406 "zone_management": false, 00:19:34.406 "zone_append": false, 00:19:34.406 "compare": false, 00:19:34.406 "compare_and_write": false, 00:19:34.406 "abort": true, 00:19:34.406 "seek_hole": false, 00:19:34.406 "seek_data": false, 00:19:34.406 "copy": true, 00:19:34.406 "nvme_iov_md": false 00:19:34.406 }, 00:19:34.406 "memory_domains": [ 00:19:34.406 { 00:19:34.406 "dma_device_id": "system", 00:19:34.406 "dma_device_type": 1 00:19:34.406 }, 00:19:34.406 { 00:19:34.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.406 "dma_device_type": 2 00:19:34.406 } 00:19:34.406 ], 00:19:34.406 "driver_specific": {} 00:19:34.406 } 00:19:34.406 ] 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.406 07:16:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.406 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.407 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.407 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.407 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.407 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.407 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.407 "name": "Existed_Raid", 00:19:34.407 "uuid": "3c20efaa-e175-4610-857f-b6cfd9523892", 00:19:34.407 "strip_size_kb": 64, 00:19:34.407 "state": "online", 00:19:34.407 "raid_level": "raid5f", 00:19:34.407 "superblock": false, 00:19:34.407 "num_base_bdevs": 4, 00:19:34.407 "num_base_bdevs_discovered": 4, 00:19:34.407 "num_base_bdevs_operational": 4, 00:19:34.407 "base_bdevs_list": [ 00:19:34.407 { 00:19:34.407 "name": 
"BaseBdev1", 00:19:34.407 "uuid": "946b5ada-419c-46d3-8ecb-8594fd1a4e50", 00:19:34.407 "is_configured": true, 00:19:34.407 "data_offset": 0, 00:19:34.407 "data_size": 65536 00:19:34.407 }, 00:19:34.407 { 00:19:34.407 "name": "BaseBdev2", 00:19:34.407 "uuid": "b96737a5-cb8c-4ed9-97bb-0eec7847612c", 00:19:34.407 "is_configured": true, 00:19:34.407 "data_offset": 0, 00:19:34.407 "data_size": 65536 00:19:34.407 }, 00:19:34.407 { 00:19:34.407 "name": "BaseBdev3", 00:19:34.407 "uuid": "f0d4e8fd-dbb1-4c27-83a1-cf857cf30f9f", 00:19:34.407 "is_configured": true, 00:19:34.407 "data_offset": 0, 00:19:34.407 "data_size": 65536 00:19:34.407 }, 00:19:34.407 { 00:19:34.407 "name": "BaseBdev4", 00:19:34.407 "uuid": "cb95e538-c616-469d-82e7-da643ae8978a", 00:19:34.407 "is_configured": true, 00:19:34.407 "data_offset": 0, 00:19:34.407 "data_size": 65536 00:19:34.407 } 00:19:34.407 ] 00:19:34.407 }' 00:19:34.407 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.407 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.974 [2024-11-20 07:16:16.948141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.974 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.974 "name": "Existed_Raid", 00:19:34.974 "aliases": [ 00:19:34.974 "3c20efaa-e175-4610-857f-b6cfd9523892" 00:19:34.974 ], 00:19:34.974 "product_name": "Raid Volume", 00:19:34.974 "block_size": 512, 00:19:34.974 "num_blocks": 196608, 00:19:34.974 "uuid": "3c20efaa-e175-4610-857f-b6cfd9523892", 00:19:34.974 "assigned_rate_limits": { 00:19:34.974 "rw_ios_per_sec": 0, 00:19:34.974 "rw_mbytes_per_sec": 0, 00:19:34.974 "r_mbytes_per_sec": 0, 00:19:34.974 "w_mbytes_per_sec": 0 00:19:34.974 }, 00:19:34.974 "claimed": false, 00:19:34.974 "zoned": false, 00:19:34.974 "supported_io_types": { 00:19:34.974 "read": true, 00:19:34.974 "write": true, 00:19:34.974 "unmap": false, 00:19:34.974 "flush": false, 00:19:34.974 "reset": true, 00:19:34.974 "nvme_admin": false, 00:19:34.974 "nvme_io": false, 00:19:34.974 "nvme_io_md": false, 00:19:34.974 "write_zeroes": true, 00:19:34.974 "zcopy": false, 00:19:34.974 "get_zone_info": false, 00:19:34.974 "zone_management": false, 00:19:34.974 "zone_append": false, 00:19:34.974 "compare": false, 00:19:34.974 "compare_and_write": false, 00:19:34.974 "abort": false, 00:19:34.974 "seek_hole": false, 00:19:34.974 "seek_data": false, 00:19:34.974 "copy": false, 00:19:34.974 "nvme_iov_md": false 00:19:34.974 }, 00:19:34.974 "driver_specific": { 00:19:34.974 "raid": { 00:19:34.974 "uuid": "3c20efaa-e175-4610-857f-b6cfd9523892", 00:19:34.974 "strip_size_kb": 64, 
00:19:34.974 "state": "online", 00:19:34.974 "raid_level": "raid5f", 00:19:34.974 "superblock": false, 00:19:34.974 "num_base_bdevs": 4, 00:19:34.974 "num_base_bdevs_discovered": 4, 00:19:34.974 "num_base_bdevs_operational": 4, 00:19:34.974 "base_bdevs_list": [ 00:19:34.974 { 00:19:34.974 "name": "BaseBdev1", 00:19:34.974 "uuid": "946b5ada-419c-46d3-8ecb-8594fd1a4e50", 00:19:34.974 "is_configured": true, 00:19:34.974 "data_offset": 0, 00:19:34.974 "data_size": 65536 00:19:34.974 }, 00:19:34.974 { 00:19:34.974 "name": "BaseBdev2", 00:19:34.974 "uuid": "b96737a5-cb8c-4ed9-97bb-0eec7847612c", 00:19:34.974 "is_configured": true, 00:19:34.974 "data_offset": 0, 00:19:34.974 "data_size": 65536 00:19:34.974 }, 00:19:34.975 { 00:19:34.975 "name": "BaseBdev3", 00:19:34.975 "uuid": "f0d4e8fd-dbb1-4c27-83a1-cf857cf30f9f", 00:19:34.975 "is_configured": true, 00:19:34.975 "data_offset": 0, 00:19:34.975 "data_size": 65536 00:19:34.975 }, 00:19:34.975 { 00:19:34.975 "name": "BaseBdev4", 00:19:34.975 "uuid": "cb95e538-c616-469d-82e7-da643ae8978a", 00:19:34.975 "is_configured": true, 00:19:34.975 "data_offset": 0, 00:19:34.975 "data_size": 65536 00:19:34.975 } 00:19:34.975 ] 00:19:34.975 } 00:19:34.975 } 00:19:34.975 }' 00:19:34.975 07:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:34.975 BaseBdev2 00:19:34.975 BaseBdev3 00:19:34.975 BaseBdev4' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.975 07:16:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.975 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:19:35.234 [2024-11-20 07:16:17.287471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.234 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.235 07:16:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.235 "name": "Existed_Raid", 00:19:35.235 "uuid": "3c20efaa-e175-4610-857f-b6cfd9523892", 00:19:35.235 "strip_size_kb": 64, 00:19:35.235 "state": "online", 00:19:35.235 "raid_level": "raid5f", 00:19:35.235 "superblock": false, 00:19:35.235 "num_base_bdevs": 4, 00:19:35.235 "num_base_bdevs_discovered": 3, 00:19:35.235 "num_base_bdevs_operational": 3, 00:19:35.235 "base_bdevs_list": [ 00:19:35.235 { 00:19:35.235 "name": null, 00:19:35.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.235 "is_configured": false, 00:19:35.235 "data_offset": 0, 00:19:35.235 "data_size": 65536 00:19:35.235 }, 00:19:35.235 { 00:19:35.235 "name": "BaseBdev2", 00:19:35.235 "uuid": "b96737a5-cb8c-4ed9-97bb-0eec7847612c", 00:19:35.235 "is_configured": true, 00:19:35.235 "data_offset": 0, 00:19:35.235 "data_size": 65536 00:19:35.235 }, 00:19:35.235 { 00:19:35.235 "name": "BaseBdev3", 00:19:35.235 "uuid": "f0d4e8fd-dbb1-4c27-83a1-cf857cf30f9f", 00:19:35.235 "is_configured": true, 00:19:35.235 "data_offset": 0, 00:19:35.235 "data_size": 65536 00:19:35.235 }, 00:19:35.235 { 00:19:35.235 "name": "BaseBdev4", 00:19:35.235 "uuid": "cb95e538-c616-469d-82e7-da643ae8978a", 00:19:35.235 "is_configured": true, 00:19:35.235 "data_offset": 0, 00:19:35.235 "data_size": 65536 00:19:35.235 } 00:19:35.235 ] 00:19:35.235 }' 00:19:35.235 
07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.235 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:35.803 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.804 07:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:35.804 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.804 07:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.804 [2024-11-20 07:16:17.898365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:35.804 [2024-11-20 07:16:17.898482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.804 [2024-11-20 07:16:18.010416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.804 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.063 [2024-11-20 07:16:18.070398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.063 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.063 [2024-11-20 07:16:18.233975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:36.063 [2024-11-20 07:16:18.234041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.324 BaseBdev2 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.324 [ 00:19:36.324 { 00:19:36.324 "name": "BaseBdev2", 00:19:36.324 "aliases": [ 00:19:36.324 "0bd90283-a7de-4769-b7bc-6ee981030c43" 00:19:36.324 ], 00:19:36.324 "product_name": "Malloc disk", 00:19:36.324 "block_size": 512, 00:19:36.324 "num_blocks": 65536, 00:19:36.324 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:36.324 "assigned_rate_limits": { 00:19:36.324 "rw_ios_per_sec": 0, 00:19:36.324 "rw_mbytes_per_sec": 0, 00:19:36.324 "r_mbytes_per_sec": 0, 00:19:36.324 "w_mbytes_per_sec": 0 00:19:36.324 }, 00:19:36.324 "claimed": false, 00:19:36.324 "zoned": false, 00:19:36.324 "supported_io_types": { 00:19:36.324 "read": true, 00:19:36.324 "write": true, 00:19:36.324 "unmap": true, 00:19:36.324 "flush": true, 00:19:36.324 "reset": true, 00:19:36.324 "nvme_admin": false, 00:19:36.324 "nvme_io": false, 00:19:36.324 "nvme_io_md": false, 00:19:36.324 "write_zeroes": true, 00:19:36.324 "zcopy": true, 00:19:36.324 "get_zone_info": false, 00:19:36.324 "zone_management": false, 00:19:36.324 "zone_append": false, 00:19:36.324 "compare": false, 00:19:36.324 "compare_and_write": false, 00:19:36.324 "abort": true, 00:19:36.324 "seek_hole": false, 00:19:36.324 "seek_data": false, 00:19:36.324 "copy": true, 00:19:36.324 "nvme_iov_md": false 00:19:36.324 }, 00:19:36.324 "memory_domains": [ 00:19:36.324 { 00:19:36.324 "dma_device_id": "system", 00:19:36.324 
"dma_device_type": 1 00:19:36.324 }, 00:19:36.324 { 00:19:36.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.324 "dma_device_type": 2 00:19:36.324 } 00:19:36.324 ], 00:19:36.324 "driver_specific": {} 00:19:36.324 } 00:19:36.324 ] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.324 BaseBdev3 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:36.324 07:16:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.324 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.325 [ 00:19:36.325 { 00:19:36.325 "name": "BaseBdev3", 00:19:36.325 "aliases": [ 00:19:36.325 "a6efdb66-713e-4fc9-a1a5-8a60bc404d19" 00:19:36.325 ], 00:19:36.325 "product_name": "Malloc disk", 00:19:36.325 "block_size": 512, 00:19:36.325 "num_blocks": 65536, 00:19:36.325 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:36.325 "assigned_rate_limits": { 00:19:36.325 "rw_ios_per_sec": 0, 00:19:36.325 "rw_mbytes_per_sec": 0, 00:19:36.325 "r_mbytes_per_sec": 0, 00:19:36.325 "w_mbytes_per_sec": 0 00:19:36.325 }, 00:19:36.325 "claimed": false, 00:19:36.325 "zoned": false, 00:19:36.325 "supported_io_types": { 00:19:36.325 "read": true, 00:19:36.325 "write": true, 00:19:36.325 "unmap": true, 00:19:36.325 "flush": true, 00:19:36.325 "reset": true, 00:19:36.325 "nvme_admin": false, 00:19:36.325 "nvme_io": false, 00:19:36.325 "nvme_io_md": false, 00:19:36.325 "write_zeroes": true, 00:19:36.325 "zcopy": true, 00:19:36.325 "get_zone_info": false, 00:19:36.325 "zone_management": false, 00:19:36.325 "zone_append": false, 00:19:36.325 "compare": false, 00:19:36.325 "compare_and_write": false, 00:19:36.325 "abort": true, 00:19:36.325 "seek_hole": false, 00:19:36.325 "seek_data": false, 00:19:36.325 "copy": true, 00:19:36.325 "nvme_iov_md": false 00:19:36.325 }, 00:19:36.325 "memory_domains": [ 00:19:36.325 { 00:19:36.325 
"dma_device_id": "system", 00:19:36.325 "dma_device_type": 1 00:19:36.325 }, 00:19:36.325 { 00:19:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.325 "dma_device_type": 2 00:19:36.325 } 00:19:36.325 ], 00:19:36.325 "driver_specific": {} 00:19:36.325 } 00:19:36.325 ] 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.325 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.585 BaseBdev4 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.585 [ 00:19:36.585 { 00:19:36.585 "name": "BaseBdev4", 00:19:36.585 "aliases": [ 00:19:36.585 "436ed4f4-76d4-4465-9c00-b0539883e696" 00:19:36.585 ], 00:19:36.585 "product_name": "Malloc disk", 00:19:36.585 "block_size": 512, 00:19:36.585 "num_blocks": 65536, 00:19:36.585 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:36.585 "assigned_rate_limits": { 00:19:36.585 "rw_ios_per_sec": 0, 00:19:36.585 "rw_mbytes_per_sec": 0, 00:19:36.585 "r_mbytes_per_sec": 0, 00:19:36.585 "w_mbytes_per_sec": 0 00:19:36.585 }, 00:19:36.585 "claimed": false, 00:19:36.585 "zoned": false, 00:19:36.585 "supported_io_types": { 00:19:36.585 "read": true, 00:19:36.585 "write": true, 00:19:36.585 "unmap": true, 00:19:36.585 "flush": true, 00:19:36.585 "reset": true, 00:19:36.585 "nvme_admin": false, 00:19:36.585 "nvme_io": false, 00:19:36.585 "nvme_io_md": false, 00:19:36.585 "write_zeroes": true, 00:19:36.585 "zcopy": true, 00:19:36.585 "get_zone_info": false, 00:19:36.585 "zone_management": false, 00:19:36.585 "zone_append": false, 00:19:36.585 "compare": false, 00:19:36.585 "compare_and_write": false, 00:19:36.585 "abort": true, 00:19:36.585 "seek_hole": false, 00:19:36.585 "seek_data": false, 00:19:36.585 "copy": true, 00:19:36.585 "nvme_iov_md": false 00:19:36.585 }, 00:19:36.585 "memory_domains": [ 
00:19:36.585 { 00:19:36.585 "dma_device_id": "system", 00:19:36.585 "dma_device_type": 1 00:19:36.585 }, 00:19:36.585 { 00:19:36.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.585 "dma_device_type": 2 00:19:36.585 } 00:19:36.585 ], 00:19:36.585 "driver_specific": {} 00:19:36.585 } 00:19:36.585 ] 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.585 [2024-11-20 07:16:18.676535] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:36.585 [2024-11-20 07:16:18.676656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:36.585 [2024-11-20 07:16:18.676697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:36.585 [2024-11-20 07:16:18.678977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:36.585 [2024-11-20 07:16:18.679052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.585 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.586 "name": "Existed_Raid", 00:19:36.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.586 "strip_size_kb": 64, 00:19:36.586 "state": "configuring", 00:19:36.586 "raid_level": "raid5f", 00:19:36.586 
"superblock": false, 00:19:36.586 "num_base_bdevs": 4, 00:19:36.586 "num_base_bdevs_discovered": 3, 00:19:36.586 "num_base_bdevs_operational": 4, 00:19:36.586 "base_bdevs_list": [ 00:19:36.586 { 00:19:36.586 "name": "BaseBdev1", 00:19:36.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.586 "is_configured": false, 00:19:36.586 "data_offset": 0, 00:19:36.586 "data_size": 0 00:19:36.586 }, 00:19:36.586 { 00:19:36.586 "name": "BaseBdev2", 00:19:36.586 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:36.586 "is_configured": true, 00:19:36.586 "data_offset": 0, 00:19:36.586 "data_size": 65536 00:19:36.586 }, 00:19:36.586 { 00:19:36.586 "name": "BaseBdev3", 00:19:36.586 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:36.586 "is_configured": true, 00:19:36.586 "data_offset": 0, 00:19:36.586 "data_size": 65536 00:19:36.586 }, 00:19:36.586 { 00:19:36.586 "name": "BaseBdev4", 00:19:36.586 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:36.586 "is_configured": true, 00:19:36.586 "data_offset": 0, 00:19:36.586 "data_size": 65536 00:19:36.586 } 00:19:36.586 ] 00:19:36.586 }' 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.586 07:16:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.155 [2024-11-20 07:16:19.159894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.155 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.156 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.156 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.156 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.156 "name": "Existed_Raid", 00:19:37.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.156 "strip_size_kb": 64, 00:19:37.156 "state": "configuring", 00:19:37.156 "raid_level": "raid5f", 00:19:37.156 "superblock": false, 
00:19:37.156 "num_base_bdevs": 4, 00:19:37.156 "num_base_bdevs_discovered": 2, 00:19:37.156 "num_base_bdevs_operational": 4, 00:19:37.156 "base_bdevs_list": [ 00:19:37.156 { 00:19:37.156 "name": "BaseBdev1", 00:19:37.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.156 "is_configured": false, 00:19:37.156 "data_offset": 0, 00:19:37.156 "data_size": 0 00:19:37.156 }, 00:19:37.156 { 00:19:37.156 "name": null, 00:19:37.156 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:37.156 "is_configured": false, 00:19:37.156 "data_offset": 0, 00:19:37.156 "data_size": 65536 00:19:37.156 }, 00:19:37.156 { 00:19:37.156 "name": "BaseBdev3", 00:19:37.156 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:37.156 "is_configured": true, 00:19:37.156 "data_offset": 0, 00:19:37.156 "data_size": 65536 00:19:37.156 }, 00:19:37.156 { 00:19:37.156 "name": "BaseBdev4", 00:19:37.156 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:37.156 "is_configured": true, 00:19:37.156 "data_offset": 0, 00:19:37.156 "data_size": 65536 00:19:37.156 } 00:19:37.156 ] 00:19:37.156 }' 00:19:37.156 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.156 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.415 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:37.415 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.415 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.415 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.415 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:37.673 
07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.673 [2024-11-20 07:16:19.726052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.673 BaseBdev1 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.673 
07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.673 [ 00:19:37.673 { 00:19:37.673 "name": "BaseBdev1", 00:19:37.673 "aliases": [ 00:19:37.673 "1cbe7a6c-e32d-49a8-a704-8a7b07d35163" 00:19:37.673 ], 00:19:37.673 "product_name": "Malloc disk", 00:19:37.673 "block_size": 512, 00:19:37.673 "num_blocks": 65536, 00:19:37.673 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:37.673 "assigned_rate_limits": { 00:19:37.673 "rw_ios_per_sec": 0, 00:19:37.673 "rw_mbytes_per_sec": 0, 00:19:37.673 "r_mbytes_per_sec": 0, 00:19:37.673 "w_mbytes_per_sec": 0 00:19:37.673 }, 00:19:37.673 "claimed": true, 00:19:37.673 "claim_type": "exclusive_write", 00:19:37.673 "zoned": false, 00:19:37.673 "supported_io_types": { 00:19:37.673 "read": true, 00:19:37.673 "write": true, 00:19:37.673 "unmap": true, 00:19:37.673 "flush": true, 00:19:37.673 "reset": true, 00:19:37.673 "nvme_admin": false, 00:19:37.673 "nvme_io": false, 00:19:37.673 "nvme_io_md": false, 00:19:37.673 "write_zeroes": true, 00:19:37.673 "zcopy": true, 00:19:37.673 "get_zone_info": false, 00:19:37.673 "zone_management": false, 00:19:37.673 "zone_append": false, 00:19:37.673 "compare": false, 00:19:37.673 "compare_and_write": false, 00:19:37.673 "abort": true, 00:19:37.673 "seek_hole": false, 00:19:37.673 "seek_data": false, 00:19:37.673 "copy": true, 00:19:37.673 "nvme_iov_md": false 00:19:37.673 }, 00:19:37.673 "memory_domains": [ 00:19:37.673 { 00:19:37.673 "dma_device_id": "system", 00:19:37.673 "dma_device_type": 1 00:19:37.673 }, 00:19:37.673 { 00:19:37.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.673 "dma_device_type": 2 00:19:37.673 } 00:19:37.673 ], 00:19:37.673 "driver_specific": {} 00:19:37.673 } 00:19:37.673 ] 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:37.673 07:16:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.673 "name": "Existed_Raid", 00:19:37.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.673 "strip_size_kb": 64, 00:19:37.673 "state": 
"configuring", 00:19:37.673 "raid_level": "raid5f", 00:19:37.673 "superblock": false, 00:19:37.673 "num_base_bdevs": 4, 00:19:37.673 "num_base_bdevs_discovered": 3, 00:19:37.673 "num_base_bdevs_operational": 4, 00:19:37.673 "base_bdevs_list": [ 00:19:37.673 { 00:19:37.673 "name": "BaseBdev1", 00:19:37.673 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:37.673 "is_configured": true, 00:19:37.673 "data_offset": 0, 00:19:37.673 "data_size": 65536 00:19:37.673 }, 00:19:37.673 { 00:19:37.673 "name": null, 00:19:37.673 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:37.673 "is_configured": false, 00:19:37.673 "data_offset": 0, 00:19:37.673 "data_size": 65536 00:19:37.673 }, 00:19:37.673 { 00:19:37.673 "name": "BaseBdev3", 00:19:37.673 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:37.673 "is_configured": true, 00:19:37.673 "data_offset": 0, 00:19:37.673 "data_size": 65536 00:19:37.673 }, 00:19:37.673 { 00:19:37.673 "name": "BaseBdev4", 00:19:37.673 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:37.673 "is_configured": true, 00:19:37.673 "data_offset": 0, 00:19:37.673 "data_size": 65536 00:19:37.673 } 00:19:37.673 ] 00:19:37.673 }' 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.673 07:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.273 07:16:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.273 [2024-11-20 07:16:20.257378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.273 07:16:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.273 "name": "Existed_Raid", 00:19:38.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.273 "strip_size_kb": 64, 00:19:38.273 "state": "configuring", 00:19:38.273 "raid_level": "raid5f", 00:19:38.273 "superblock": false, 00:19:38.273 "num_base_bdevs": 4, 00:19:38.273 "num_base_bdevs_discovered": 2, 00:19:38.273 "num_base_bdevs_operational": 4, 00:19:38.273 "base_bdevs_list": [ 00:19:38.273 { 00:19:38.273 "name": "BaseBdev1", 00:19:38.273 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:38.273 "is_configured": true, 00:19:38.273 "data_offset": 0, 00:19:38.273 "data_size": 65536 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "name": null, 00:19:38.273 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:38.273 "is_configured": false, 00:19:38.273 "data_offset": 0, 00:19:38.273 "data_size": 65536 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "name": null, 00:19:38.273 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:38.273 "is_configured": false, 00:19:38.273 "data_offset": 0, 00:19:38.273 "data_size": 65536 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "name": "BaseBdev4", 00:19:38.273 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:38.273 "is_configured": true, 00:19:38.273 "data_offset": 0, 00:19:38.273 "data_size": 65536 00:19:38.273 } 00:19:38.273 ] 00:19:38.273 }' 00:19:38.273 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.273 07:16:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.533 [2024-11-20 07:16:20.776522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.533 
07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.533 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.792 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.792 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.792 "name": "Existed_Raid", 00:19:38.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.792 "strip_size_kb": 64, 00:19:38.792 "state": "configuring", 00:19:38.792 "raid_level": "raid5f", 00:19:38.792 "superblock": false, 00:19:38.792 "num_base_bdevs": 4, 00:19:38.792 "num_base_bdevs_discovered": 3, 00:19:38.792 "num_base_bdevs_operational": 4, 00:19:38.792 "base_bdevs_list": [ 00:19:38.792 { 00:19:38.792 "name": "BaseBdev1", 00:19:38.792 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:38.792 "is_configured": true, 00:19:38.792 "data_offset": 0, 00:19:38.792 "data_size": 65536 00:19:38.792 }, 00:19:38.792 { 00:19:38.792 "name": null, 00:19:38.792 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:38.792 "is_configured": 
false, 00:19:38.792 "data_offset": 0, 00:19:38.792 "data_size": 65536 00:19:38.792 }, 00:19:38.792 { 00:19:38.792 "name": "BaseBdev3", 00:19:38.792 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:38.792 "is_configured": true, 00:19:38.792 "data_offset": 0, 00:19:38.792 "data_size": 65536 00:19:38.792 }, 00:19:38.792 { 00:19:38.792 "name": "BaseBdev4", 00:19:38.792 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:38.792 "is_configured": true, 00:19:38.792 "data_offset": 0, 00:19:38.792 "data_size": 65536 00:19:38.792 } 00:19:38.792 ] 00:19:38.792 }' 00:19:38.792 07:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.792 07:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.052 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.311 [2024-11-20 07:16:21.319649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.311 "name": "Existed_Raid", 00:19:39.311 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:39.311 "strip_size_kb": 64, 00:19:39.311 "state": "configuring", 00:19:39.311 "raid_level": "raid5f", 00:19:39.311 "superblock": false, 00:19:39.311 "num_base_bdevs": 4, 00:19:39.311 "num_base_bdevs_discovered": 2, 00:19:39.311 "num_base_bdevs_operational": 4, 00:19:39.311 "base_bdevs_list": [ 00:19:39.311 { 00:19:39.311 "name": null, 00:19:39.311 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:39.311 "is_configured": false, 00:19:39.311 "data_offset": 0, 00:19:39.311 "data_size": 65536 00:19:39.311 }, 00:19:39.311 { 00:19:39.311 "name": null, 00:19:39.311 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:39.311 "is_configured": false, 00:19:39.311 "data_offset": 0, 00:19:39.311 "data_size": 65536 00:19:39.311 }, 00:19:39.311 { 00:19:39.311 "name": "BaseBdev3", 00:19:39.311 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:39.311 "is_configured": true, 00:19:39.311 "data_offset": 0, 00:19:39.311 "data_size": 65536 00:19:39.311 }, 00:19:39.311 { 00:19:39.311 "name": "BaseBdev4", 00:19:39.311 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:39.311 "is_configured": true, 00:19:39.311 "data_offset": 0, 00:19:39.311 "data_size": 65536 00:19:39.311 } 00:19:39.311 ] 00:19:39.311 }' 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.311 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.879 [2024-11-20 07:16:21.924660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.879 "name": "Existed_Raid", 00:19:39.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.879 "strip_size_kb": 64, 00:19:39.879 "state": "configuring", 00:19:39.879 "raid_level": "raid5f", 00:19:39.879 "superblock": false, 00:19:39.879 "num_base_bdevs": 4, 00:19:39.879 "num_base_bdevs_discovered": 3, 00:19:39.879 "num_base_bdevs_operational": 4, 00:19:39.879 "base_bdevs_list": [ 00:19:39.879 { 00:19:39.879 "name": null, 00:19:39.879 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:39.879 "is_configured": false, 00:19:39.879 "data_offset": 0, 00:19:39.879 "data_size": 65536 00:19:39.879 }, 00:19:39.879 { 00:19:39.879 "name": "BaseBdev2", 00:19:39.879 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:39.879 "is_configured": true, 00:19:39.879 "data_offset": 0, 00:19:39.879 "data_size": 65536 00:19:39.879 }, 00:19:39.879 { 00:19:39.879 "name": "BaseBdev3", 00:19:39.879 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:39.879 "is_configured": true, 00:19:39.879 "data_offset": 0, 00:19:39.879 "data_size": 65536 00:19:39.879 }, 00:19:39.879 { 00:19:39.879 "name": "BaseBdev4", 00:19:39.879 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:39.879 "is_configured": true, 00:19:39.879 "data_offset": 0, 00:19:39.879 "data_size": 65536 00:19:39.879 } 00:19:39.879 ] 00:19:39.879 }' 00:19:39.879 07:16:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.879 07:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1cbe7a6c-e32d-49a8-a704-8a7b07d35163 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.445 [2024-11-20 07:16:22.569323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:40.445 [2024-11-20 
07:16:22.569486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:40.445 [2024-11-20 07:16:22.569501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:40.445 [2024-11-20 07:16:22.569814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:40.445 [2024-11-20 07:16:22.578199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:40.445 [2024-11-20 07:16:22.578223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:40.445 [2024-11-20 07:16:22.578569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.445 NewBaseBdev 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.445 [ 00:19:40.445 { 00:19:40.445 "name": "NewBaseBdev", 00:19:40.445 "aliases": [ 00:19:40.445 "1cbe7a6c-e32d-49a8-a704-8a7b07d35163" 00:19:40.445 ], 00:19:40.445 "product_name": "Malloc disk", 00:19:40.445 "block_size": 512, 00:19:40.445 "num_blocks": 65536, 00:19:40.445 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:40.445 "assigned_rate_limits": { 00:19:40.445 "rw_ios_per_sec": 0, 00:19:40.445 "rw_mbytes_per_sec": 0, 00:19:40.445 "r_mbytes_per_sec": 0, 00:19:40.445 "w_mbytes_per_sec": 0 00:19:40.445 }, 00:19:40.445 "claimed": true, 00:19:40.445 "claim_type": "exclusive_write", 00:19:40.445 "zoned": false, 00:19:40.445 "supported_io_types": { 00:19:40.445 "read": true, 00:19:40.445 "write": true, 00:19:40.445 "unmap": true, 00:19:40.445 "flush": true, 00:19:40.445 "reset": true, 00:19:40.445 "nvme_admin": false, 00:19:40.445 "nvme_io": false, 00:19:40.445 "nvme_io_md": false, 00:19:40.445 "write_zeroes": true, 00:19:40.445 "zcopy": true, 00:19:40.445 "get_zone_info": false, 00:19:40.445 "zone_management": false, 00:19:40.445 "zone_append": false, 00:19:40.445 "compare": false, 00:19:40.445 "compare_and_write": false, 00:19:40.445 "abort": true, 00:19:40.445 "seek_hole": false, 00:19:40.445 "seek_data": false, 00:19:40.445 "copy": true, 00:19:40.445 "nvme_iov_md": false 00:19:40.445 }, 00:19:40.445 "memory_domains": [ 00:19:40.445 { 00:19:40.445 "dma_device_id": "system", 00:19:40.445 "dma_device_type": 1 00:19:40.445 }, 00:19:40.445 { 00:19:40.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.445 "dma_device_type": 2 00:19:40.445 } 
00:19:40.445 ], 00:19:40.445 "driver_specific": {} 00:19:40.445 } 00:19:40.445 ] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.445 "name": "Existed_Raid", 00:19:40.445 "uuid": "1ee5d0db-fb66-4141-92cb-67b61a6e4574", 00:19:40.445 "strip_size_kb": 64, 00:19:40.445 "state": "online", 00:19:40.445 "raid_level": "raid5f", 00:19:40.445 "superblock": false, 00:19:40.445 "num_base_bdevs": 4, 00:19:40.445 "num_base_bdevs_discovered": 4, 00:19:40.445 "num_base_bdevs_operational": 4, 00:19:40.445 "base_bdevs_list": [ 00:19:40.445 { 00:19:40.445 "name": "NewBaseBdev", 00:19:40.445 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:40.445 "is_configured": true, 00:19:40.445 "data_offset": 0, 00:19:40.445 "data_size": 65536 00:19:40.445 }, 00:19:40.445 { 00:19:40.445 "name": "BaseBdev2", 00:19:40.445 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:40.445 "is_configured": true, 00:19:40.445 "data_offset": 0, 00:19:40.445 "data_size": 65536 00:19:40.445 }, 00:19:40.445 { 00:19:40.445 "name": "BaseBdev3", 00:19:40.445 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:40.445 "is_configured": true, 00:19:40.445 "data_offset": 0, 00:19:40.445 "data_size": 65536 00:19:40.445 }, 00:19:40.445 { 00:19:40.445 "name": "BaseBdev4", 00:19:40.445 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:40.445 "is_configured": true, 00:19:40.445 "data_offset": 0, 00:19:40.445 "data_size": 65536 00:19:40.445 } 00:19:40.445 ] 00:19:40.445 }' 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.445 07:16:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:41.017 [2024-11-20 07:16:23.135858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.017 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:41.017 "name": "Existed_Raid", 00:19:41.017 "aliases": [ 00:19:41.017 "1ee5d0db-fb66-4141-92cb-67b61a6e4574" 00:19:41.017 ], 00:19:41.017 "product_name": "Raid Volume", 00:19:41.017 "block_size": 512, 00:19:41.017 "num_blocks": 196608, 00:19:41.018 "uuid": "1ee5d0db-fb66-4141-92cb-67b61a6e4574", 00:19:41.018 "assigned_rate_limits": { 00:19:41.018 "rw_ios_per_sec": 0, 00:19:41.018 "rw_mbytes_per_sec": 0, 00:19:41.018 "r_mbytes_per_sec": 0, 00:19:41.018 "w_mbytes_per_sec": 0 00:19:41.018 }, 00:19:41.018 "claimed": false, 00:19:41.018 "zoned": false, 00:19:41.018 "supported_io_types": { 00:19:41.018 "read": true, 00:19:41.018 "write": true, 00:19:41.018 "unmap": false, 00:19:41.018 "flush": false, 00:19:41.018 "reset": true, 00:19:41.018 "nvme_admin": false, 00:19:41.018 "nvme_io": false, 00:19:41.018 "nvme_io_md": 
false, 00:19:41.018 "write_zeroes": true, 00:19:41.018 "zcopy": false, 00:19:41.018 "get_zone_info": false, 00:19:41.018 "zone_management": false, 00:19:41.018 "zone_append": false, 00:19:41.018 "compare": false, 00:19:41.018 "compare_and_write": false, 00:19:41.018 "abort": false, 00:19:41.018 "seek_hole": false, 00:19:41.018 "seek_data": false, 00:19:41.018 "copy": false, 00:19:41.018 "nvme_iov_md": false 00:19:41.018 }, 00:19:41.018 "driver_specific": { 00:19:41.018 "raid": { 00:19:41.018 "uuid": "1ee5d0db-fb66-4141-92cb-67b61a6e4574", 00:19:41.018 "strip_size_kb": 64, 00:19:41.018 "state": "online", 00:19:41.018 "raid_level": "raid5f", 00:19:41.018 "superblock": false, 00:19:41.018 "num_base_bdevs": 4, 00:19:41.018 "num_base_bdevs_discovered": 4, 00:19:41.018 "num_base_bdevs_operational": 4, 00:19:41.018 "base_bdevs_list": [ 00:19:41.018 { 00:19:41.018 "name": "NewBaseBdev", 00:19:41.018 "uuid": "1cbe7a6c-e32d-49a8-a704-8a7b07d35163", 00:19:41.018 "is_configured": true, 00:19:41.018 "data_offset": 0, 00:19:41.018 "data_size": 65536 00:19:41.018 }, 00:19:41.018 { 00:19:41.018 "name": "BaseBdev2", 00:19:41.018 "uuid": "0bd90283-a7de-4769-b7bc-6ee981030c43", 00:19:41.018 "is_configured": true, 00:19:41.018 "data_offset": 0, 00:19:41.018 "data_size": 65536 00:19:41.018 }, 00:19:41.018 { 00:19:41.018 "name": "BaseBdev3", 00:19:41.018 "uuid": "a6efdb66-713e-4fc9-a1a5-8a60bc404d19", 00:19:41.018 "is_configured": true, 00:19:41.018 "data_offset": 0, 00:19:41.018 "data_size": 65536 00:19:41.018 }, 00:19:41.018 { 00:19:41.018 "name": "BaseBdev4", 00:19:41.018 "uuid": "436ed4f4-76d4-4465-9c00-b0539883e696", 00:19:41.018 "is_configured": true, 00:19:41.018 "data_offset": 0, 00:19:41.018 "data_size": 65536 00:19:41.018 } 00:19:41.018 ] 00:19:41.018 } 00:19:41.018 } 00:19:41.018 }' 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:41.018 07:16:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:41.018 BaseBdev2 00:19:41.018 BaseBdev3 00:19:41.018 BaseBdev4' 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.018 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.277 07:16:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.277 [2024-11-20 07:16:23.451043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.277 [2024-11-20 07:16:23.451074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.277 [2024-11-20 07:16:23.451160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.277 [2024-11-20 07:16:23.451518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.277 [2024-11-20 07:16:23.451531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.277 07:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83269 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83269 ']' 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83269 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83269 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83269' 00:19:41.278 killing process with pid 83269 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83269 00:19:41.278 07:16:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83269 00:19:41.278 [2024-11-20 07:16:23.498680] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.846 [2024-11-20 07:16:23.980781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:43.221 00:19:43.221 real 0m12.413s 00:19:43.221 user 0m19.579s 00:19:43.221 sys 0m2.102s 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.221 ************************************ 00:19:43.221 END TEST raid5f_state_function_test 00:19:43.221 ************************************ 00:19:43.221 07:16:25 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:43.221 07:16:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:43.221 07:16:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.221 07:16:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.221 ************************************ 00:19:43.221 START TEST 
raid5f_state_function_test_sb 00:19:43.221 ************************************ 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:43.221 
07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:43.221 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:43.222 Process raid pid: 83945 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83945 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83945' 00:19:43.222 07:16:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83945 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83945 ']' 00:19:43.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.222 07:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.483 [2024-11-20 07:16:25.509550] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:19:43.483 [2024-11-20 07:16:25.509819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.483 [2024-11-20 07:16:25.676277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.765 [2024-11-20 07:16:25.822367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.038 [2024-11-20 07:16:26.067457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.038 [2024-11-20 07:16:26.067509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.297 [2024-11-20 07:16:26.444061] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:44.297 [2024-11-20 07:16:26.444227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:44.297 [2024-11-20 07:16:26.444254] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.297 [2024-11-20 07:16:26.444270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.297 [2024-11-20 07:16:26.444279] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:44.297 [2024-11-20 07:16:26.444291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:44.297 [2024-11-20 07:16:26.444300] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:44.297 [2024-11-20 07:16:26.444312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.297 "name": "Existed_Raid", 00:19:44.297 "uuid": "aaa54ae6-87d6-45d5-9ccc-1a7ad5eac02e", 00:19:44.297 "strip_size_kb": 64, 00:19:44.297 "state": "configuring", 00:19:44.297 "raid_level": "raid5f", 00:19:44.297 "superblock": true, 00:19:44.297 "num_base_bdevs": 4, 00:19:44.297 "num_base_bdevs_discovered": 0, 00:19:44.297 "num_base_bdevs_operational": 4, 00:19:44.297 "base_bdevs_list": [ 00:19:44.297 { 00:19:44.297 "name": "BaseBdev1", 00:19:44.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.297 "is_configured": false, 00:19:44.297 "data_offset": 0, 00:19:44.297 "data_size": 0 00:19:44.297 }, 00:19:44.297 { 00:19:44.297 "name": "BaseBdev2", 00:19:44.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.297 "is_configured": false, 00:19:44.297 "data_offset": 0, 00:19:44.297 "data_size": 0 00:19:44.297 }, 00:19:44.297 { 00:19:44.297 "name": "BaseBdev3", 00:19:44.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.297 "is_configured": false, 00:19:44.297 "data_offset": 0, 00:19:44.297 "data_size": 0 00:19:44.297 }, 00:19:44.297 { 00:19:44.297 "name": "BaseBdev4", 00:19:44.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.297 "is_configured": false, 00:19:44.297 "data_offset": 0, 00:19:44.297 "data_size": 0 00:19:44.297 } 00:19:44.297 ] 00:19:44.297 }' 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.297 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.865 [2024-11-20 07:16:26.943181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:44.865 [2024-11-20 07:16:26.943328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.865 [2024-11-20 07:16:26.955173] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:44.865 [2024-11-20 07:16:26.955300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:44.865 [2024-11-20 07:16:26.955345] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.865 [2024-11-20 07:16:26.955373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.865 [2024-11-20 07:16:26.955447] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:44.865 [2024-11-20 07:16:26.955484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:44.865 [2024-11-20 07:16:26.955513] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:44.865 [2024-11-20 07:16:26.955539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.865 07:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.865 [2024-11-20 07:16:27.006422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.865 BaseBdev1 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.865 [ 00:19:44.865 { 00:19:44.865 "name": "BaseBdev1", 00:19:44.865 "aliases": [ 00:19:44.865 "2d28be90-4803-471d-886e-6995cdca48bb" 00:19:44.865 ], 00:19:44.865 "product_name": "Malloc disk", 00:19:44.865 "block_size": 512, 00:19:44.865 "num_blocks": 65536, 00:19:44.865 "uuid": "2d28be90-4803-471d-886e-6995cdca48bb", 00:19:44.865 "assigned_rate_limits": { 00:19:44.865 "rw_ios_per_sec": 0, 00:19:44.865 "rw_mbytes_per_sec": 0, 00:19:44.865 "r_mbytes_per_sec": 0, 00:19:44.865 "w_mbytes_per_sec": 0 00:19:44.865 }, 00:19:44.865 "claimed": true, 00:19:44.865 "claim_type": "exclusive_write", 00:19:44.865 "zoned": false, 00:19:44.865 "supported_io_types": { 00:19:44.865 "read": true, 00:19:44.865 "write": true, 00:19:44.865 "unmap": true, 00:19:44.865 "flush": true, 00:19:44.865 "reset": true, 00:19:44.865 "nvme_admin": false, 00:19:44.865 "nvme_io": false, 00:19:44.865 "nvme_io_md": false, 00:19:44.865 "write_zeroes": true, 00:19:44.865 "zcopy": true, 00:19:44.865 "get_zone_info": false, 00:19:44.865 "zone_management": false, 00:19:44.865 "zone_append": false, 00:19:44.865 "compare": false, 00:19:44.865 "compare_and_write": false, 00:19:44.865 "abort": true, 00:19:44.865 "seek_hole": false, 00:19:44.865 "seek_data": false, 00:19:44.865 "copy": true, 00:19:44.865 "nvme_iov_md": false 00:19:44.865 }, 00:19:44.865 "memory_domains": [ 00:19:44.865 { 00:19:44.865 "dma_device_id": "system", 00:19:44.865 "dma_device_type": 1 00:19:44.865 }, 00:19:44.865 { 00:19:44.865 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:44.865 "dma_device_type": 2 00:19:44.865 } 00:19:44.865 ], 00:19:44.865 "driver_specific": {} 00:19:44.865 } 00:19:44.865 ] 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:44.865 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.866 07:16:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.866 "name": "Existed_Raid", 00:19:44.866 "uuid": "0ccf0e1c-c8e4-4e65-b94f-d5e55838e793", 00:19:44.866 "strip_size_kb": 64, 00:19:44.866 "state": "configuring", 00:19:44.866 "raid_level": "raid5f", 00:19:44.866 "superblock": true, 00:19:44.866 "num_base_bdevs": 4, 00:19:44.866 "num_base_bdevs_discovered": 1, 00:19:44.866 "num_base_bdevs_operational": 4, 00:19:44.866 "base_bdevs_list": [ 00:19:44.866 { 00:19:44.866 "name": "BaseBdev1", 00:19:44.866 "uuid": "2d28be90-4803-471d-886e-6995cdca48bb", 00:19:44.866 "is_configured": true, 00:19:44.866 "data_offset": 2048, 00:19:44.866 "data_size": 63488 00:19:44.866 }, 00:19:44.866 { 00:19:44.866 "name": "BaseBdev2", 00:19:44.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.866 "is_configured": false, 00:19:44.866 "data_offset": 0, 00:19:44.866 "data_size": 0 00:19:44.866 }, 00:19:44.866 { 00:19:44.866 "name": "BaseBdev3", 00:19:44.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.866 "is_configured": false, 00:19:44.866 "data_offset": 0, 00:19:44.866 "data_size": 0 00:19:44.866 }, 00:19:44.866 { 00:19:44.866 "name": "BaseBdev4", 00:19:44.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.866 "is_configured": false, 00:19:44.866 "data_offset": 0, 00:19:44.866 "data_size": 0 00:19:44.866 } 00:19:44.866 ] 00:19:44.866 }' 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.866 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:45.434 07:16:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.434 [2024-11-20 07:16:27.505649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:45.434 [2024-11-20 07:16:27.505791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.434 [2024-11-20 07:16:27.517728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.434 [2024-11-20 07:16:27.519988] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:45.434 [2024-11-20 07:16:27.520084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:45.434 [2024-11-20 07:16:27.520130] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:45.434 [2024-11-20 07:16:27.520161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:45.434 [2024-11-20 07:16:27.520204] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:45.434 [2024-11-20 07:16:27.520231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.434 07:16:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.434 "name": "Existed_Raid", 00:19:45.434 "uuid": "02323365-75cd-4590-801b-cc35ccc0abd2", 00:19:45.434 "strip_size_kb": 64, 00:19:45.434 "state": "configuring", 00:19:45.434 "raid_level": "raid5f", 00:19:45.434 "superblock": true, 00:19:45.434 "num_base_bdevs": 4, 00:19:45.434 "num_base_bdevs_discovered": 1, 00:19:45.434 "num_base_bdevs_operational": 4, 00:19:45.434 "base_bdevs_list": [ 00:19:45.434 { 00:19:45.434 "name": "BaseBdev1", 00:19:45.434 "uuid": "2d28be90-4803-471d-886e-6995cdca48bb", 00:19:45.434 "is_configured": true, 00:19:45.434 "data_offset": 2048, 00:19:45.434 "data_size": 63488 00:19:45.434 }, 00:19:45.434 { 00:19:45.434 "name": "BaseBdev2", 00:19:45.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.434 "is_configured": false, 00:19:45.434 "data_offset": 0, 00:19:45.434 "data_size": 0 00:19:45.434 }, 00:19:45.434 { 00:19:45.434 "name": "BaseBdev3", 00:19:45.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.434 "is_configured": false, 00:19:45.434 "data_offset": 0, 00:19:45.434 "data_size": 0 00:19:45.434 }, 00:19:45.434 { 00:19:45.434 "name": "BaseBdev4", 00:19:45.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.434 "is_configured": false, 00:19:45.434 "data_offset": 0, 00:19:45.434 "data_size": 0 00:19:45.434 } 00:19:45.434 ] 00:19:45.434 }' 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.434 07:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.002 [2024-11-20 07:16:28.058322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.002 BaseBdev2 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:46.002 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.003 [ 00:19:46.003 { 00:19:46.003 "name": "BaseBdev2", 00:19:46.003 "aliases": [ 00:19:46.003 
"016d6ed4-49c4-45e7-9064-51cb27574680" 00:19:46.003 ], 00:19:46.003 "product_name": "Malloc disk", 00:19:46.003 "block_size": 512, 00:19:46.003 "num_blocks": 65536, 00:19:46.003 "uuid": "016d6ed4-49c4-45e7-9064-51cb27574680", 00:19:46.003 "assigned_rate_limits": { 00:19:46.003 "rw_ios_per_sec": 0, 00:19:46.003 "rw_mbytes_per_sec": 0, 00:19:46.003 "r_mbytes_per_sec": 0, 00:19:46.003 "w_mbytes_per_sec": 0 00:19:46.003 }, 00:19:46.003 "claimed": true, 00:19:46.003 "claim_type": "exclusive_write", 00:19:46.003 "zoned": false, 00:19:46.003 "supported_io_types": { 00:19:46.003 "read": true, 00:19:46.003 "write": true, 00:19:46.003 "unmap": true, 00:19:46.003 "flush": true, 00:19:46.003 "reset": true, 00:19:46.003 "nvme_admin": false, 00:19:46.003 "nvme_io": false, 00:19:46.003 "nvme_io_md": false, 00:19:46.003 "write_zeroes": true, 00:19:46.003 "zcopy": true, 00:19:46.003 "get_zone_info": false, 00:19:46.003 "zone_management": false, 00:19:46.003 "zone_append": false, 00:19:46.003 "compare": false, 00:19:46.003 "compare_and_write": false, 00:19:46.003 "abort": true, 00:19:46.003 "seek_hole": false, 00:19:46.003 "seek_data": false, 00:19:46.003 "copy": true, 00:19:46.003 "nvme_iov_md": false 00:19:46.003 }, 00:19:46.003 "memory_domains": [ 00:19:46.003 { 00:19:46.003 "dma_device_id": "system", 00:19:46.003 "dma_device_type": 1 00:19:46.003 }, 00:19:46.003 { 00:19:46.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.003 "dma_device_type": 2 00:19:46.003 } 00:19:46.003 ], 00:19:46.003 "driver_specific": {} 00:19:46.003 } 00:19:46.003 ] 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.003 "name": "Existed_Raid", 00:19:46.003 "uuid": 
"02323365-75cd-4590-801b-cc35ccc0abd2", 00:19:46.003 "strip_size_kb": 64, 00:19:46.003 "state": "configuring", 00:19:46.003 "raid_level": "raid5f", 00:19:46.003 "superblock": true, 00:19:46.003 "num_base_bdevs": 4, 00:19:46.003 "num_base_bdevs_discovered": 2, 00:19:46.003 "num_base_bdevs_operational": 4, 00:19:46.003 "base_bdevs_list": [ 00:19:46.003 { 00:19:46.003 "name": "BaseBdev1", 00:19:46.003 "uuid": "2d28be90-4803-471d-886e-6995cdca48bb", 00:19:46.003 "is_configured": true, 00:19:46.003 "data_offset": 2048, 00:19:46.003 "data_size": 63488 00:19:46.003 }, 00:19:46.003 { 00:19:46.003 "name": "BaseBdev2", 00:19:46.003 "uuid": "016d6ed4-49c4-45e7-9064-51cb27574680", 00:19:46.003 "is_configured": true, 00:19:46.003 "data_offset": 2048, 00:19:46.003 "data_size": 63488 00:19:46.003 }, 00:19:46.003 { 00:19:46.003 "name": "BaseBdev3", 00:19:46.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.003 "is_configured": false, 00:19:46.003 "data_offset": 0, 00:19:46.003 "data_size": 0 00:19:46.003 }, 00:19:46.003 { 00:19:46.003 "name": "BaseBdev4", 00:19:46.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.003 "is_configured": false, 00:19:46.003 "data_offset": 0, 00:19:46.003 "data_size": 0 00:19:46.003 } 00:19:46.003 ] 00:19:46.003 }' 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.003 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 [2024-11-20 07:16:28.626525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.595 BaseBdev3 
00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 [ 00:19:46.595 { 00:19:46.595 "name": "BaseBdev3", 00:19:46.595 "aliases": [ 00:19:46.595 "019f5483-bb53-4a7a-a985-09be8e90c12a" 00:19:46.595 ], 00:19:46.595 "product_name": "Malloc disk", 00:19:46.595 "block_size": 512, 00:19:46.595 "num_blocks": 65536, 00:19:46.595 "uuid": "019f5483-bb53-4a7a-a985-09be8e90c12a", 00:19:46.595 
"assigned_rate_limits": { 00:19:46.595 "rw_ios_per_sec": 0, 00:19:46.595 "rw_mbytes_per_sec": 0, 00:19:46.595 "r_mbytes_per_sec": 0, 00:19:46.595 "w_mbytes_per_sec": 0 00:19:46.595 }, 00:19:46.595 "claimed": true, 00:19:46.595 "claim_type": "exclusive_write", 00:19:46.595 "zoned": false, 00:19:46.595 "supported_io_types": { 00:19:46.595 "read": true, 00:19:46.595 "write": true, 00:19:46.595 "unmap": true, 00:19:46.595 "flush": true, 00:19:46.595 "reset": true, 00:19:46.595 "nvme_admin": false, 00:19:46.595 "nvme_io": false, 00:19:46.595 "nvme_io_md": false, 00:19:46.595 "write_zeroes": true, 00:19:46.595 "zcopy": true, 00:19:46.595 "get_zone_info": false, 00:19:46.595 "zone_management": false, 00:19:46.595 "zone_append": false, 00:19:46.595 "compare": false, 00:19:46.595 "compare_and_write": false, 00:19:46.595 "abort": true, 00:19:46.595 "seek_hole": false, 00:19:46.595 "seek_data": false, 00:19:46.595 "copy": true, 00:19:46.595 "nvme_iov_md": false 00:19:46.595 }, 00:19:46.595 "memory_domains": [ 00:19:46.595 { 00:19:46.595 "dma_device_id": "system", 00:19:46.595 "dma_device_type": 1 00:19:46.595 }, 00:19:46.595 { 00:19:46.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.595 "dma_device_type": 2 00:19:46.595 } 00:19:46.595 ], 00:19:46.595 "driver_specific": {} 00:19:46.595 } 00:19:46.595 ] 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.596 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.596 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.596 "name": "Existed_Raid", 00:19:46.596 "uuid": "02323365-75cd-4590-801b-cc35ccc0abd2", 00:19:46.596 "strip_size_kb": 64, 00:19:46.596 "state": "configuring", 00:19:46.596 "raid_level": "raid5f", 00:19:46.596 "superblock": true, 00:19:46.596 "num_base_bdevs": 4, 00:19:46.596 "num_base_bdevs_discovered": 3, 
00:19:46.596 "num_base_bdevs_operational": 4, 00:19:46.596 "base_bdevs_list": [ 00:19:46.596 { 00:19:46.596 "name": "BaseBdev1", 00:19:46.596 "uuid": "2d28be90-4803-471d-886e-6995cdca48bb", 00:19:46.596 "is_configured": true, 00:19:46.596 "data_offset": 2048, 00:19:46.596 "data_size": 63488 00:19:46.596 }, 00:19:46.596 { 00:19:46.596 "name": "BaseBdev2", 00:19:46.596 "uuid": "016d6ed4-49c4-45e7-9064-51cb27574680", 00:19:46.596 "is_configured": true, 00:19:46.596 "data_offset": 2048, 00:19:46.596 "data_size": 63488 00:19:46.596 }, 00:19:46.596 { 00:19:46.596 "name": "BaseBdev3", 00:19:46.596 "uuid": "019f5483-bb53-4a7a-a985-09be8e90c12a", 00:19:46.596 "is_configured": true, 00:19:46.596 "data_offset": 2048, 00:19:46.596 "data_size": 63488 00:19:46.596 }, 00:19:46.596 { 00:19:46.596 "name": "BaseBdev4", 00:19:46.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.596 "is_configured": false, 00:19:46.596 "data_offset": 0, 00:19:46.596 "data_size": 0 00:19:46.596 } 00:19:46.596 ] 00:19:46.596 }' 00:19:46.596 07:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.596 07:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.161 [2024-11-20 07:16:29.203111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:47.161 [2024-11-20 07:16:29.203605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:47.161 [2024-11-20 07:16:29.203627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:47.161 [2024-11-20 
07:16:29.203969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:47.161 BaseBdev4 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.161 [2024-11-20 07:16:29.214075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:47.161 [2024-11-20 07:16:29.214221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:47.161 [2024-11-20 07:16:29.214710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.161 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:47.161 07:16:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.162 [ 00:19:47.162 { 00:19:47.162 "name": "BaseBdev4", 00:19:47.162 "aliases": [ 00:19:47.162 "ba3b880f-771b-421b-a4c4-9d2f522983fe" 00:19:47.162 ], 00:19:47.162 "product_name": "Malloc disk", 00:19:47.162 "block_size": 512, 00:19:47.162 "num_blocks": 65536, 00:19:47.162 "uuid": "ba3b880f-771b-421b-a4c4-9d2f522983fe", 00:19:47.162 "assigned_rate_limits": { 00:19:47.162 "rw_ios_per_sec": 0, 00:19:47.162 "rw_mbytes_per_sec": 0, 00:19:47.162 "r_mbytes_per_sec": 0, 00:19:47.162 "w_mbytes_per_sec": 0 00:19:47.162 }, 00:19:47.162 "claimed": true, 00:19:47.162 "claim_type": "exclusive_write", 00:19:47.162 "zoned": false, 00:19:47.162 "supported_io_types": { 00:19:47.162 "read": true, 00:19:47.162 "write": true, 00:19:47.162 "unmap": true, 00:19:47.162 "flush": true, 00:19:47.162 "reset": true, 00:19:47.162 "nvme_admin": false, 00:19:47.162 "nvme_io": false, 00:19:47.162 "nvme_io_md": false, 00:19:47.162 "write_zeroes": true, 00:19:47.162 "zcopy": true, 00:19:47.162 "get_zone_info": false, 00:19:47.162 "zone_management": false, 00:19:47.162 "zone_append": false, 00:19:47.162 "compare": false, 00:19:47.162 "compare_and_write": false, 00:19:47.162 "abort": true, 00:19:47.162 "seek_hole": false, 00:19:47.162 "seek_data": false, 00:19:47.162 "copy": true, 00:19:47.162 "nvme_iov_md": false 00:19:47.162 }, 00:19:47.162 "memory_domains": [ 00:19:47.162 { 00:19:47.162 "dma_device_id": "system", 00:19:47.162 "dma_device_type": 1 00:19:47.162 }, 00:19:47.162 { 00:19:47.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.162 "dma_device_type": 2 00:19:47.162 } 00:19:47.162 ], 00:19:47.162 "driver_specific": {} 00:19:47.162 } 00:19:47.162 ] 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.162 07:16:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.162 "name": "Existed_Raid", 00:19:47.162 "uuid": "02323365-75cd-4590-801b-cc35ccc0abd2", 00:19:47.162 "strip_size_kb": 64, 00:19:47.162 "state": "online", 00:19:47.162 "raid_level": "raid5f", 00:19:47.162 "superblock": true, 00:19:47.162 "num_base_bdevs": 4, 00:19:47.162 "num_base_bdevs_discovered": 4, 00:19:47.162 "num_base_bdevs_operational": 4, 00:19:47.162 "base_bdevs_list": [ 00:19:47.162 { 00:19:47.162 "name": "BaseBdev1", 00:19:47.162 "uuid": "2d28be90-4803-471d-886e-6995cdca48bb", 00:19:47.162 "is_configured": true, 00:19:47.162 "data_offset": 2048, 00:19:47.162 "data_size": 63488 00:19:47.162 }, 00:19:47.162 { 00:19:47.162 "name": "BaseBdev2", 00:19:47.162 "uuid": "016d6ed4-49c4-45e7-9064-51cb27574680", 00:19:47.162 "is_configured": true, 00:19:47.162 "data_offset": 2048, 00:19:47.162 "data_size": 63488 00:19:47.162 }, 00:19:47.162 { 00:19:47.162 "name": "BaseBdev3", 00:19:47.162 "uuid": "019f5483-bb53-4a7a-a985-09be8e90c12a", 00:19:47.162 "is_configured": true, 00:19:47.162 "data_offset": 2048, 00:19:47.162 "data_size": 63488 00:19:47.162 }, 00:19:47.162 { 00:19:47.162 "name": "BaseBdev4", 00:19:47.162 "uuid": "ba3b880f-771b-421b-a4c4-9d2f522983fe", 00:19:47.162 "is_configured": true, 00:19:47.162 "data_offset": 2048, 00:19:47.162 "data_size": 63488 00:19:47.162 } 00:19:47.162 ] 00:19:47.162 }' 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.162 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.727 [2024-11-20 07:16:29.749383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.727 "name": "Existed_Raid", 00:19:47.727 "aliases": [ 00:19:47.727 "02323365-75cd-4590-801b-cc35ccc0abd2" 00:19:47.727 ], 00:19:47.727 "product_name": "Raid Volume", 00:19:47.727 "block_size": 512, 00:19:47.727 "num_blocks": 190464, 00:19:47.727 "uuid": "02323365-75cd-4590-801b-cc35ccc0abd2", 00:19:47.727 "assigned_rate_limits": { 00:19:47.727 "rw_ios_per_sec": 0, 00:19:47.727 "rw_mbytes_per_sec": 0, 00:19:47.727 "r_mbytes_per_sec": 0, 00:19:47.727 "w_mbytes_per_sec": 0 00:19:47.727 }, 00:19:47.727 "claimed": false, 00:19:47.727 "zoned": false, 00:19:47.727 "supported_io_types": { 00:19:47.727 "read": true, 00:19:47.727 "write": true, 00:19:47.727 "unmap": false, 00:19:47.727 "flush": false, 
00:19:47.727 "reset": true, 00:19:47.727 "nvme_admin": false, 00:19:47.727 "nvme_io": false, 00:19:47.727 "nvme_io_md": false, 00:19:47.727 "write_zeroes": true, 00:19:47.727 "zcopy": false, 00:19:47.727 "get_zone_info": false, 00:19:47.727 "zone_management": false, 00:19:47.727 "zone_append": false, 00:19:47.727 "compare": false, 00:19:47.727 "compare_and_write": false, 00:19:47.727 "abort": false, 00:19:47.727 "seek_hole": false, 00:19:47.727 "seek_data": false, 00:19:47.727 "copy": false, 00:19:47.727 "nvme_iov_md": false 00:19:47.727 }, 00:19:47.727 "driver_specific": { 00:19:47.727 "raid": { 00:19:47.727 "uuid": "02323365-75cd-4590-801b-cc35ccc0abd2", 00:19:47.727 "strip_size_kb": 64, 00:19:47.727 "state": "online", 00:19:47.727 "raid_level": "raid5f", 00:19:47.727 "superblock": true, 00:19:47.727 "num_base_bdevs": 4, 00:19:47.727 "num_base_bdevs_discovered": 4, 00:19:47.727 "num_base_bdevs_operational": 4, 00:19:47.727 "base_bdevs_list": [ 00:19:47.727 { 00:19:47.727 "name": "BaseBdev1", 00:19:47.727 "uuid": "2d28be90-4803-471d-886e-6995cdca48bb", 00:19:47.727 "is_configured": true, 00:19:47.727 "data_offset": 2048, 00:19:47.727 "data_size": 63488 00:19:47.727 }, 00:19:47.727 { 00:19:47.727 "name": "BaseBdev2", 00:19:47.727 "uuid": "016d6ed4-49c4-45e7-9064-51cb27574680", 00:19:47.727 "is_configured": true, 00:19:47.727 "data_offset": 2048, 00:19:47.727 "data_size": 63488 00:19:47.727 }, 00:19:47.727 { 00:19:47.727 "name": "BaseBdev3", 00:19:47.727 "uuid": "019f5483-bb53-4a7a-a985-09be8e90c12a", 00:19:47.727 "is_configured": true, 00:19:47.727 "data_offset": 2048, 00:19:47.727 "data_size": 63488 00:19:47.727 }, 00:19:47.727 { 00:19:47.727 "name": "BaseBdev4", 00:19:47.727 "uuid": "ba3b880f-771b-421b-a4c4-9d2f522983fe", 00:19:47.727 "is_configured": true, 00:19:47.727 "data_offset": 2048, 00:19:47.727 "data_size": 63488 00:19:47.727 } 00:19:47.727 ] 00:19:47.727 } 00:19:47.727 } 00:19:47.727 }' 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:47.727 BaseBdev2 00:19:47.727 BaseBdev3 00:19:47.727 BaseBdev4' 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.727 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.728 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.986 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.986 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.986 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.986 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:47.986 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.986 07:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.986 07:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.986 07:16:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.986 [2024-11-20 07:16:30.084884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.986 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.987 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.987 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.987 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.987 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.246 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.246 "name": "Existed_Raid", 00:19:48.246 "uuid": "02323365-75cd-4590-801b-cc35ccc0abd2", 00:19:48.246 "strip_size_kb": 64, 00:19:48.246 "state": "online", 00:19:48.246 "raid_level": "raid5f", 00:19:48.246 "superblock": true, 00:19:48.246 "num_base_bdevs": 4, 00:19:48.246 "num_base_bdevs_discovered": 3, 00:19:48.246 "num_base_bdevs_operational": 3, 00:19:48.246 "base_bdevs_list": [ 00:19:48.246 { 00:19:48.246 "name": 
null, 00:19:48.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.246 "is_configured": false, 00:19:48.246 "data_offset": 0, 00:19:48.246 "data_size": 63488 00:19:48.246 }, 00:19:48.246 { 00:19:48.246 "name": "BaseBdev2", 00:19:48.246 "uuid": "016d6ed4-49c4-45e7-9064-51cb27574680", 00:19:48.246 "is_configured": true, 00:19:48.246 "data_offset": 2048, 00:19:48.246 "data_size": 63488 00:19:48.246 }, 00:19:48.246 { 00:19:48.246 "name": "BaseBdev3", 00:19:48.246 "uuid": "019f5483-bb53-4a7a-a985-09be8e90c12a", 00:19:48.246 "is_configured": true, 00:19:48.246 "data_offset": 2048, 00:19:48.246 "data_size": 63488 00:19:48.246 }, 00:19:48.246 { 00:19:48.246 "name": "BaseBdev4", 00:19:48.246 "uuid": "ba3b880f-771b-421b-a4c4-9d2f522983fe", 00:19:48.246 "is_configured": true, 00:19:48.246 "data_offset": 2048, 00:19:48.246 "data_size": 63488 00:19:48.246 } 00:19:48.246 ] 00:19:48.246 }' 00:19:48.246 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.246 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:48.506 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.507 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.507 [2024-11-20 07:16:30.740230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:48.507 [2024-11-20 07:16:30.740517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.765 [2024-11-20 07:16:30.880638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.765 07:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.765 [2024-11-20 07:16:30.936607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.024 [2024-11-20 
07:16:31.092567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:49.024 [2024-11-20 07:16:31.092698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:49.024 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.024 07:16:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.284 BaseBdev2 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.284 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.284 [ 00:19:49.284 { 00:19:49.284 "name": "BaseBdev2", 00:19:49.284 "aliases": [ 00:19:49.284 "85dc28e0-fd95-4086-a366-d0124495f4f0" 00:19:49.284 ], 00:19:49.284 "product_name": "Malloc disk", 00:19:49.284 "block_size": 512, 00:19:49.284 
"num_blocks": 65536, 00:19:49.284 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:49.284 "assigned_rate_limits": { 00:19:49.284 "rw_ios_per_sec": 0, 00:19:49.284 "rw_mbytes_per_sec": 0, 00:19:49.284 "r_mbytes_per_sec": 0, 00:19:49.284 "w_mbytes_per_sec": 0 00:19:49.284 }, 00:19:49.284 "claimed": false, 00:19:49.284 "zoned": false, 00:19:49.284 "supported_io_types": { 00:19:49.284 "read": true, 00:19:49.284 "write": true, 00:19:49.284 "unmap": true, 00:19:49.284 "flush": true, 00:19:49.284 "reset": true, 00:19:49.284 "nvme_admin": false, 00:19:49.284 "nvme_io": false, 00:19:49.284 "nvme_io_md": false, 00:19:49.284 "write_zeroes": true, 00:19:49.284 "zcopy": true, 00:19:49.284 "get_zone_info": false, 00:19:49.284 "zone_management": false, 00:19:49.285 "zone_append": false, 00:19:49.285 "compare": false, 00:19:49.285 "compare_and_write": false, 00:19:49.285 "abort": true, 00:19:49.285 "seek_hole": false, 00:19:49.285 "seek_data": false, 00:19:49.285 "copy": true, 00:19:49.285 "nvme_iov_md": false 00:19:49.285 }, 00:19:49.285 "memory_domains": [ 00:19:49.285 { 00:19:49.285 "dma_device_id": "system", 00:19:49.285 "dma_device_type": 1 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.285 "dma_device_type": 2 00:19:49.285 } 00:19:49.285 ], 00:19:49.285 "driver_specific": {} 00:19:49.285 } 00:19:49.285 ] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:49.285 07:16:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 BaseBdev3 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 [ 00:19:49.285 { 00:19:49.285 "name": "BaseBdev3", 00:19:49.285 "aliases": [ 00:19:49.285 
"fad26966-f9d4-4bb2-9ec4-367ca2b448e8" 00:19:49.285 ], 00:19:49.285 "product_name": "Malloc disk", 00:19:49.285 "block_size": 512, 00:19:49.285 "num_blocks": 65536, 00:19:49.285 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:49.285 "assigned_rate_limits": { 00:19:49.285 "rw_ios_per_sec": 0, 00:19:49.285 "rw_mbytes_per_sec": 0, 00:19:49.285 "r_mbytes_per_sec": 0, 00:19:49.285 "w_mbytes_per_sec": 0 00:19:49.285 }, 00:19:49.285 "claimed": false, 00:19:49.285 "zoned": false, 00:19:49.285 "supported_io_types": { 00:19:49.285 "read": true, 00:19:49.285 "write": true, 00:19:49.285 "unmap": true, 00:19:49.285 "flush": true, 00:19:49.285 "reset": true, 00:19:49.285 "nvme_admin": false, 00:19:49.285 "nvme_io": false, 00:19:49.285 "nvme_io_md": false, 00:19:49.285 "write_zeroes": true, 00:19:49.285 "zcopy": true, 00:19:49.285 "get_zone_info": false, 00:19:49.285 "zone_management": false, 00:19:49.285 "zone_append": false, 00:19:49.285 "compare": false, 00:19:49.285 "compare_and_write": false, 00:19:49.285 "abort": true, 00:19:49.285 "seek_hole": false, 00:19:49.285 "seek_data": false, 00:19:49.285 "copy": true, 00:19:49.285 "nvme_iov_md": false 00:19:49.285 }, 00:19:49.285 "memory_domains": [ 00:19:49.285 { 00:19:49.285 "dma_device_id": "system", 00:19:49.285 "dma_device_type": 1 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.285 "dma_device_type": 2 00:19:49.285 } 00:19:49.285 ], 00:19:49.285 "driver_specific": {} 00:19:49.285 } 00:19:49.285 ] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:49.285 07:16:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 BaseBdev4 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:49.285 [ 00:19:49.285 { 00:19:49.285 "name": "BaseBdev4", 00:19:49.285 "aliases": [ 00:19:49.285 "b3570576-fa91-4d9b-980f-a2d9bc174b9c" 00:19:49.285 ], 00:19:49.285 "product_name": "Malloc disk", 00:19:49.285 "block_size": 512, 00:19:49.285 "num_blocks": 65536, 00:19:49.285 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:49.285 "assigned_rate_limits": { 00:19:49.285 "rw_ios_per_sec": 0, 00:19:49.285 "rw_mbytes_per_sec": 0, 00:19:49.285 "r_mbytes_per_sec": 0, 00:19:49.285 "w_mbytes_per_sec": 0 00:19:49.285 }, 00:19:49.285 "claimed": false, 00:19:49.285 "zoned": false, 00:19:49.285 "supported_io_types": { 00:19:49.285 "read": true, 00:19:49.285 "write": true, 00:19:49.285 "unmap": true, 00:19:49.285 "flush": true, 00:19:49.285 "reset": true, 00:19:49.285 "nvme_admin": false, 00:19:49.285 "nvme_io": false, 00:19:49.285 "nvme_io_md": false, 00:19:49.285 "write_zeroes": true, 00:19:49.285 "zcopy": true, 00:19:49.285 "get_zone_info": false, 00:19:49.285 "zone_management": false, 00:19:49.285 "zone_append": false, 00:19:49.285 "compare": false, 00:19:49.285 "compare_and_write": false, 00:19:49.285 "abort": true, 00:19:49.285 "seek_hole": false, 00:19:49.285 "seek_data": false, 00:19:49.285 "copy": true, 00:19:49.285 "nvme_iov_md": false 00:19:49.285 }, 00:19:49.285 "memory_domains": [ 00:19:49.285 { 00:19:49.285 "dma_device_id": "system", 00:19:49.285 "dma_device_type": 1 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.285 "dma_device_type": 2 00:19:49.285 } 00:19:49.285 ], 00:19:49.285 "driver_specific": {} 00:19:49.285 } 00:19:49.285 ] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:49.285 07:16:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 [2024-11-20 07:16:31.528406] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:49.285 [2024-11-20 07:16:31.528553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:49.285 [2024-11-20 07:16:31.528614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.285 [2024-11-20 07:16:31.531079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.285 [2024-11-20 07:16:31.531226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.285 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.286 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.543 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.543 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.543 "name": "Existed_Raid", 00:19:49.543 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:49.543 "strip_size_kb": 64, 00:19:49.543 "state": "configuring", 00:19:49.543 "raid_level": "raid5f", 00:19:49.543 "superblock": true, 00:19:49.543 "num_base_bdevs": 4, 00:19:49.543 "num_base_bdevs_discovered": 3, 00:19:49.543 "num_base_bdevs_operational": 4, 00:19:49.543 "base_bdevs_list": [ 00:19:49.543 { 00:19:49.543 "name": "BaseBdev1", 00:19:49.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.543 "is_configured": false, 00:19:49.543 "data_offset": 0, 00:19:49.543 "data_size": 0 00:19:49.543 }, 00:19:49.543 { 00:19:49.543 "name": "BaseBdev2", 00:19:49.543 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:49.543 "is_configured": true, 00:19:49.543 "data_offset": 2048, 00:19:49.543 
"data_size": 63488 00:19:49.543 }, 00:19:49.543 { 00:19:49.543 "name": "BaseBdev3", 00:19:49.543 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:49.543 "is_configured": true, 00:19:49.543 "data_offset": 2048, 00:19:49.543 "data_size": 63488 00:19:49.543 }, 00:19:49.543 { 00:19:49.543 "name": "BaseBdev4", 00:19:49.543 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:49.543 "is_configured": true, 00:19:49.543 "data_offset": 2048, 00:19:49.543 "data_size": 63488 00:19:49.543 } 00:19:49.543 ] 00:19:49.543 }' 00:19:49.544 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.544 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.803 [2024-11-20 07:16:31.947657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.803 07:16:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.803 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.803 "name": "Existed_Raid", 00:19:49.803 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:49.803 "strip_size_kb": 64, 00:19:49.803 "state": "configuring", 00:19:49.803 "raid_level": "raid5f", 00:19:49.803 "superblock": true, 00:19:49.803 "num_base_bdevs": 4, 00:19:49.803 "num_base_bdevs_discovered": 2, 00:19:49.803 "num_base_bdevs_operational": 4, 00:19:49.803 "base_bdevs_list": [ 00:19:49.803 { 00:19:49.803 "name": "BaseBdev1", 00:19:49.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.803 "is_configured": false, 00:19:49.803 "data_offset": 0, 00:19:49.803 "data_size": 0 00:19:49.803 }, 00:19:49.803 { 00:19:49.803 "name": null, 00:19:49.803 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:49.803 
"is_configured": false, 00:19:49.803 "data_offset": 0, 00:19:49.803 "data_size": 63488 00:19:49.803 }, 00:19:49.803 { 00:19:49.803 "name": "BaseBdev3", 00:19:49.803 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:49.803 "is_configured": true, 00:19:49.803 "data_offset": 2048, 00:19:49.803 "data_size": 63488 00:19:49.803 }, 00:19:49.803 { 00:19:49.803 "name": "BaseBdev4", 00:19:49.803 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:49.804 "is_configured": true, 00:19:49.804 "data_offset": 2048, 00:19:49.804 "data_size": 63488 00:19:49.804 } 00:19:49.804 ] 00:19:49.804 }' 00:19:49.804 07:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.804 07:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.371 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.371 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.371 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.371 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:50.371 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.371 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.372 [2024-11-20 07:16:32.513087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:19:50.372 BaseBdev1 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.372 [ 00:19:50.372 { 00:19:50.372 "name": "BaseBdev1", 00:19:50.372 "aliases": [ 00:19:50.372 "59f98e3f-5b40-456f-b876-1b7597e93eee" 00:19:50.372 ], 00:19:50.372 "product_name": "Malloc disk", 00:19:50.372 "block_size": 512, 00:19:50.372 "num_blocks": 65536, 00:19:50.372 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 
00:19:50.372 "assigned_rate_limits": { 00:19:50.372 "rw_ios_per_sec": 0, 00:19:50.372 "rw_mbytes_per_sec": 0, 00:19:50.372 "r_mbytes_per_sec": 0, 00:19:50.372 "w_mbytes_per_sec": 0 00:19:50.372 }, 00:19:50.372 "claimed": true, 00:19:50.372 "claim_type": "exclusive_write", 00:19:50.372 "zoned": false, 00:19:50.372 "supported_io_types": { 00:19:50.372 "read": true, 00:19:50.372 "write": true, 00:19:50.372 "unmap": true, 00:19:50.372 "flush": true, 00:19:50.372 "reset": true, 00:19:50.372 "nvme_admin": false, 00:19:50.372 "nvme_io": false, 00:19:50.372 "nvme_io_md": false, 00:19:50.372 "write_zeroes": true, 00:19:50.372 "zcopy": true, 00:19:50.372 "get_zone_info": false, 00:19:50.372 "zone_management": false, 00:19:50.372 "zone_append": false, 00:19:50.372 "compare": false, 00:19:50.372 "compare_and_write": false, 00:19:50.372 "abort": true, 00:19:50.372 "seek_hole": false, 00:19:50.372 "seek_data": false, 00:19:50.372 "copy": true, 00:19:50.372 "nvme_iov_md": false 00:19:50.372 }, 00:19:50.372 "memory_domains": [ 00:19:50.372 { 00:19:50.372 "dma_device_id": "system", 00:19:50.372 "dma_device_type": 1 00:19:50.372 }, 00:19:50.372 { 00:19:50.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.372 "dma_device_type": 2 00:19:50.372 } 00:19:50.372 ], 00:19:50.372 "driver_specific": {} 00:19:50.372 } 00:19:50.372 ] 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.372 07:16:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.372 "name": "Existed_Raid", 00:19:50.372 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:50.372 "strip_size_kb": 64, 00:19:50.372 "state": "configuring", 00:19:50.372 "raid_level": "raid5f", 00:19:50.372 "superblock": true, 00:19:50.372 "num_base_bdevs": 4, 00:19:50.372 "num_base_bdevs_discovered": 3, 00:19:50.372 "num_base_bdevs_operational": 4, 00:19:50.372 "base_bdevs_list": [ 00:19:50.372 { 00:19:50.372 "name": "BaseBdev1", 00:19:50.372 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 
00:19:50.372 "is_configured": true, 00:19:50.372 "data_offset": 2048, 00:19:50.372 "data_size": 63488 00:19:50.372 }, 00:19:50.372 { 00:19:50.372 "name": null, 00:19:50.372 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:50.372 "is_configured": false, 00:19:50.372 "data_offset": 0, 00:19:50.372 "data_size": 63488 00:19:50.372 }, 00:19:50.372 { 00:19:50.372 "name": "BaseBdev3", 00:19:50.372 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:50.372 "is_configured": true, 00:19:50.372 "data_offset": 2048, 00:19:50.372 "data_size": 63488 00:19:50.372 }, 00:19:50.372 { 00:19:50.372 "name": "BaseBdev4", 00:19:50.372 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:50.372 "is_configured": true, 00:19:50.372 "data_offset": 2048, 00:19:50.372 "data_size": 63488 00:19:50.372 } 00:19:50.372 ] 00:19:50.372 }' 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.372 07:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.941 [2024-11-20 07:16:33.060826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.941 "name": "Existed_Raid", 00:19:50.941 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:50.941 "strip_size_kb": 64, 00:19:50.941 "state": "configuring", 00:19:50.941 "raid_level": "raid5f", 00:19:50.941 "superblock": true, 00:19:50.941 "num_base_bdevs": 4, 00:19:50.941 "num_base_bdevs_discovered": 2, 00:19:50.941 "num_base_bdevs_operational": 4, 00:19:50.941 "base_bdevs_list": [ 00:19:50.941 { 00:19:50.941 "name": "BaseBdev1", 00:19:50.941 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 00:19:50.941 "is_configured": true, 00:19:50.941 "data_offset": 2048, 00:19:50.941 "data_size": 63488 00:19:50.941 }, 00:19:50.941 { 00:19:50.941 "name": null, 00:19:50.941 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:50.941 "is_configured": false, 00:19:50.941 "data_offset": 0, 00:19:50.941 "data_size": 63488 00:19:50.941 }, 00:19:50.941 { 00:19:50.941 "name": null, 00:19:50.941 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:50.941 "is_configured": false, 00:19:50.941 "data_offset": 0, 00:19:50.941 "data_size": 63488 00:19:50.941 }, 00:19:50.941 { 00:19:50.941 "name": "BaseBdev4", 00:19:50.941 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:50.941 "is_configured": true, 00:19:50.941 "data_offset": 2048, 00:19:50.941 "data_size": 63488 00:19:50.941 } 00:19:50.941 ] 00:19:50.941 }' 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.941 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 [2024-11-20 07:16:33.576034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:51.509 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.510 "name": "Existed_Raid", 00:19:51.510 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:51.510 "strip_size_kb": 64, 00:19:51.510 "state": "configuring", 00:19:51.510 "raid_level": "raid5f", 00:19:51.510 "superblock": true, 00:19:51.510 "num_base_bdevs": 4, 00:19:51.510 "num_base_bdevs_discovered": 3, 00:19:51.510 "num_base_bdevs_operational": 4, 00:19:51.510 "base_bdevs_list": [ 00:19:51.510 { 00:19:51.510 "name": "BaseBdev1", 00:19:51.510 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 00:19:51.510 "is_configured": true, 00:19:51.510 "data_offset": 2048, 00:19:51.510 "data_size": 63488 00:19:51.510 }, 00:19:51.510 { 00:19:51.510 "name": null, 00:19:51.510 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:51.510 "is_configured": false, 00:19:51.510 "data_offset": 0, 00:19:51.510 "data_size": 63488 00:19:51.510 }, 00:19:51.510 { 00:19:51.510 "name": "BaseBdev3", 00:19:51.510 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 
00:19:51.510 "is_configured": true, 00:19:51.510 "data_offset": 2048, 00:19:51.510 "data_size": 63488 00:19:51.510 }, 00:19:51.510 { 00:19:51.510 "name": "BaseBdev4", 00:19:51.510 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:51.510 "is_configured": true, 00:19:51.510 "data_offset": 2048, 00:19:51.510 "data_size": 63488 00:19:51.510 } 00:19:51.510 ] 00:19:51.510 }' 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.510 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.768 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.768 07:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:51.768 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.768 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.768 07:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.768 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:51.768 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:51.768 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.768 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.768 [2024-11-20 07:16:34.015328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.027 "name": "Existed_Raid", 00:19:52.027 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:52.027 "strip_size_kb": 64, 00:19:52.027 "state": "configuring", 00:19:52.027 "raid_level": "raid5f", 
00:19:52.027 "superblock": true, 00:19:52.027 "num_base_bdevs": 4, 00:19:52.027 "num_base_bdevs_discovered": 2, 00:19:52.027 "num_base_bdevs_operational": 4, 00:19:52.027 "base_bdevs_list": [ 00:19:52.027 { 00:19:52.027 "name": null, 00:19:52.027 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 00:19:52.027 "is_configured": false, 00:19:52.027 "data_offset": 0, 00:19:52.027 "data_size": 63488 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "name": null, 00:19:52.027 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:52.027 "is_configured": false, 00:19:52.027 "data_offset": 0, 00:19:52.027 "data_size": 63488 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "name": "BaseBdev3", 00:19:52.027 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:52.027 "is_configured": true, 00:19:52.027 "data_offset": 2048, 00:19:52.027 "data_size": 63488 00:19:52.027 }, 00:19:52.027 { 00:19:52.027 "name": "BaseBdev4", 00:19:52.027 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:52.027 "is_configured": true, 00:19:52.027 "data_offset": 2048, 00:19:52.027 "data_size": 63488 00:19:52.027 } 00:19:52.027 ] 00:19:52.027 }' 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.027 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.596 [2024-11-20 07:16:34.666701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.596 "name": "Existed_Raid", 00:19:52.596 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:52.596 "strip_size_kb": 64, 00:19:52.596 "state": "configuring", 00:19:52.596 "raid_level": "raid5f", 00:19:52.596 "superblock": true, 00:19:52.596 "num_base_bdevs": 4, 00:19:52.596 "num_base_bdevs_discovered": 3, 00:19:52.596 "num_base_bdevs_operational": 4, 00:19:52.596 "base_bdevs_list": [ 00:19:52.596 { 00:19:52.596 "name": null, 00:19:52.596 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 00:19:52.596 "is_configured": false, 00:19:52.596 "data_offset": 0, 00:19:52.596 "data_size": 63488 00:19:52.596 }, 00:19:52.596 { 00:19:52.596 "name": "BaseBdev2", 00:19:52.596 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:52.596 "is_configured": true, 00:19:52.596 "data_offset": 2048, 00:19:52.596 "data_size": 63488 00:19:52.596 }, 00:19:52.596 { 00:19:52.596 "name": "BaseBdev3", 00:19:52.596 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:52.596 "is_configured": true, 00:19:52.596 "data_offset": 2048, 00:19:52.596 "data_size": 63488 00:19:52.596 }, 00:19:52.596 { 00:19:52.596 "name": "BaseBdev4", 00:19:52.596 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:52.596 "is_configured": true, 00:19:52.596 "data_offset": 2048, 00:19:52.596 "data_size": 63488 00:19:52.596 } 00:19:52.596 ] 00:19:52.596 }' 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:19:52.596 07:16:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59f98e3f-5b40-456f-b876-1b7597e93eee 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.166 [2024-11-20 07:16:35.297283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:53.166 [2024-11-20 07:16:35.297645] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:53.166 [2024-11-20 07:16:35.297663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:53.166 [2024-11-20 07:16:35.297970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:53.166 NewBaseBdev 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.166 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.166 [2024-11-20 07:16:35.306912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:53.166 [2024-11-20 07:16:35.306941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:53.166 [2024-11-20 07:16:35.307245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.167 [ 00:19:53.167 { 00:19:53.167 "name": "NewBaseBdev", 00:19:53.167 "aliases": [ 00:19:53.167 "59f98e3f-5b40-456f-b876-1b7597e93eee" 00:19:53.167 ], 00:19:53.167 "product_name": "Malloc disk", 00:19:53.167 "block_size": 512, 00:19:53.167 "num_blocks": 65536, 00:19:53.167 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 00:19:53.167 "assigned_rate_limits": { 00:19:53.167 "rw_ios_per_sec": 0, 00:19:53.167 "rw_mbytes_per_sec": 0, 00:19:53.167 "r_mbytes_per_sec": 0, 00:19:53.167 "w_mbytes_per_sec": 0 00:19:53.167 }, 00:19:53.167 "claimed": true, 00:19:53.167 "claim_type": "exclusive_write", 00:19:53.167 "zoned": false, 00:19:53.167 "supported_io_types": { 00:19:53.167 "read": true, 00:19:53.167 "write": true, 00:19:53.167 "unmap": true, 00:19:53.167 "flush": true, 00:19:53.167 "reset": true, 00:19:53.167 "nvme_admin": false, 00:19:53.167 "nvme_io": false, 00:19:53.167 "nvme_io_md": false, 00:19:53.167 "write_zeroes": true, 00:19:53.167 "zcopy": true, 00:19:53.167 "get_zone_info": false, 00:19:53.167 "zone_management": false, 00:19:53.167 "zone_append": false, 00:19:53.167 "compare": false, 00:19:53.167 "compare_and_write": false, 00:19:53.167 "abort": true, 00:19:53.167 "seek_hole": false, 00:19:53.167 "seek_data": false, 00:19:53.167 "copy": true, 00:19:53.167 "nvme_iov_md": false 00:19:53.167 }, 00:19:53.167 "memory_domains": [ 00:19:53.167 { 00:19:53.167 "dma_device_id": "system", 00:19:53.167 "dma_device_type": 1 00:19:53.167 }, 00:19:53.167 { 00:19:53.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.167 "dma_device_type": 2 00:19:53.167 } 
00:19:53.167 ], 00:19:53.167 "driver_specific": {} 00:19:53.167 } 00:19:53.167 ] 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.167 
07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.167 "name": "Existed_Raid", 00:19:53.167 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:53.167 "strip_size_kb": 64, 00:19:53.167 "state": "online", 00:19:53.167 "raid_level": "raid5f", 00:19:53.167 "superblock": true, 00:19:53.167 "num_base_bdevs": 4, 00:19:53.167 "num_base_bdevs_discovered": 4, 00:19:53.167 "num_base_bdevs_operational": 4, 00:19:53.167 "base_bdevs_list": [ 00:19:53.167 { 00:19:53.167 "name": "NewBaseBdev", 00:19:53.167 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 00:19:53.167 "is_configured": true, 00:19:53.167 "data_offset": 2048, 00:19:53.167 "data_size": 63488 00:19:53.167 }, 00:19:53.167 { 00:19:53.167 "name": "BaseBdev2", 00:19:53.167 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:53.167 "is_configured": true, 00:19:53.167 "data_offset": 2048, 00:19:53.167 "data_size": 63488 00:19:53.167 }, 00:19:53.167 { 00:19:53.167 "name": "BaseBdev3", 00:19:53.167 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:53.167 "is_configured": true, 00:19:53.167 "data_offset": 2048, 00:19:53.167 "data_size": 63488 00:19:53.167 }, 00:19:53.167 { 00:19:53.167 "name": "BaseBdev4", 00:19:53.167 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:53.167 "is_configured": true, 00:19:53.167 "data_offset": 2048, 00:19:53.167 "data_size": 63488 00:19:53.167 } 00:19:53.167 ] 00:19:53.167 }' 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.167 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.739 [2024-11-20 07:16:35.848482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.739 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.739 "name": "Existed_Raid", 00:19:53.739 "aliases": [ 00:19:53.739 "2e1100b8-0549-4518-b5a2-d6955e6b421b" 00:19:53.739 ], 00:19:53.739 "product_name": "Raid Volume", 00:19:53.739 "block_size": 512, 00:19:53.739 "num_blocks": 190464, 00:19:53.739 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:53.739 "assigned_rate_limits": { 00:19:53.739 "rw_ios_per_sec": 0, 00:19:53.739 "rw_mbytes_per_sec": 0, 00:19:53.739 "r_mbytes_per_sec": 0, 00:19:53.739 "w_mbytes_per_sec": 0 00:19:53.739 }, 00:19:53.739 "claimed": false, 00:19:53.739 "zoned": false, 00:19:53.739 "supported_io_types": { 00:19:53.739 "read": true, 00:19:53.739 "write": true, 00:19:53.739 "unmap": false, 00:19:53.739 "flush": false, 
00:19:53.739 "reset": true, 00:19:53.739 "nvme_admin": false, 00:19:53.739 "nvme_io": false, 00:19:53.739 "nvme_io_md": false, 00:19:53.739 "write_zeroes": true, 00:19:53.739 "zcopy": false, 00:19:53.739 "get_zone_info": false, 00:19:53.739 "zone_management": false, 00:19:53.739 "zone_append": false, 00:19:53.739 "compare": false, 00:19:53.739 "compare_and_write": false, 00:19:53.739 "abort": false, 00:19:53.739 "seek_hole": false, 00:19:53.739 "seek_data": false, 00:19:53.739 "copy": false, 00:19:53.739 "nvme_iov_md": false 00:19:53.739 }, 00:19:53.739 "driver_specific": { 00:19:53.739 "raid": { 00:19:53.739 "uuid": "2e1100b8-0549-4518-b5a2-d6955e6b421b", 00:19:53.739 "strip_size_kb": 64, 00:19:53.739 "state": "online", 00:19:53.739 "raid_level": "raid5f", 00:19:53.739 "superblock": true, 00:19:53.739 "num_base_bdevs": 4, 00:19:53.739 "num_base_bdevs_discovered": 4, 00:19:53.739 "num_base_bdevs_operational": 4, 00:19:53.739 "base_bdevs_list": [ 00:19:53.739 { 00:19:53.739 "name": "NewBaseBdev", 00:19:53.739 "uuid": "59f98e3f-5b40-456f-b876-1b7597e93eee", 00:19:53.739 "is_configured": true, 00:19:53.739 "data_offset": 2048, 00:19:53.739 "data_size": 63488 00:19:53.739 }, 00:19:53.739 { 00:19:53.739 "name": "BaseBdev2", 00:19:53.739 "uuid": "85dc28e0-fd95-4086-a366-d0124495f4f0", 00:19:53.739 "is_configured": true, 00:19:53.739 "data_offset": 2048, 00:19:53.739 "data_size": 63488 00:19:53.739 }, 00:19:53.739 { 00:19:53.739 "name": "BaseBdev3", 00:19:53.739 "uuid": "fad26966-f9d4-4bb2-9ec4-367ca2b448e8", 00:19:53.739 "is_configured": true, 00:19:53.739 "data_offset": 2048, 00:19:53.739 "data_size": 63488 00:19:53.739 }, 00:19:53.739 { 00:19:53.739 "name": "BaseBdev4", 00:19:53.739 "uuid": "b3570576-fa91-4d9b-980f-a2d9bc174b9c", 00:19:53.739 "is_configured": true, 00:19:53.739 "data_offset": 2048, 00:19:53.740 "data_size": 63488 00:19:53.740 } 00:19:53.740 ] 00:19:53.740 } 00:19:53.740 } 00:19:53.740 }' 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:53.740 BaseBdev2 00:19:53.740 BaseBdev3 00:19:53.740 BaseBdev4' 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.740 07:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.999 07:16:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.999 [2024-11-20 07:16:36.207595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:53.999 [2024-11-20 07:16:36.207644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.999 [2024-11-20 07:16:36.207754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.999 [2024-11-20 07:16:36.208116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.999 [2024-11-20 07:16:36.208130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83945 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83945 ']' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83945 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83945 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.999 killing process with pid 83945 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83945' 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83945 00:19:53.999 07:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83945 00:19:53.999 [2024-11-20 07:16:36.246676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.567 [2024-11-20 07:16:36.716495] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:55.949 07:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:55.949 00:19:55.949 real 0m12.656s 00:19:55.949 user 0m19.908s 00:19:55.949 sys 0m2.165s 00:19:55.949 07:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.949 07:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.949 ************************************ 00:19:55.949 END TEST raid5f_state_function_test_sb 00:19:55.949 ************************************ 00:19:55.949 07:16:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:55.949 07:16:38 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:55.949 07:16:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.949 07:16:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:55.949 ************************************ 00:19:55.949 START TEST raid5f_superblock_test 00:19:55.949 ************************************ 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84618 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84618 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84618 ']' 00:19:55.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.949 07:16:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.949 [2024-11-20 07:16:38.191137] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:19:55.949 [2024-11-20 07:16:38.191275] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84618 ] 00:19:56.209 [2024-11-20 07:16:38.374115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.469 [2024-11-20 07:16:38.511751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.778 [2024-11-20 07:16:38.757402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.778 [2024-11-20 07:16:38.757489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.039 malloc1 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.039 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.040 [2024-11-20 07:16:39.146495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:57.040 [2024-11-20 07:16:39.146646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.040 [2024-11-20 07:16:39.146699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:57.040 [2024-11-20 07:16:39.146734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.040 [2024-11-20 07:16:39.149673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.040 [2024-11-20 07:16:39.149819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:57.040 pt1 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.040 malloc2 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.040 [2024-11-20 07:16:39.213854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:57.040 [2024-11-20 07:16:39.214022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.040 [2024-11-20 07:16:39.214059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:57.040 [2024-11-20 07:16:39.214071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.040 [2024-11-20 07:16:39.216635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.040 [2024-11-20 07:16:39.216682] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:57.040 pt2 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:57.040 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.041 malloc3 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.041 [2024-11-20 07:16:39.288003] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:57.041 [2024-11-20 07:16:39.288138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.041 [2024-11-20 07:16:39.288197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:57.041 [2024-11-20 07:16:39.288235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.041 [2024-11-20 07:16:39.290812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.041 [2024-11-20 07:16:39.290903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:57.041 pt3 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:57.041 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.041 07:16:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.310 malloc4 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.310 [2024-11-20 07:16:39.350009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:57.310 [2024-11-20 07:16:39.350160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.310 [2024-11-20 07:16:39.350206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:57.310 [2024-11-20 07:16:39.350239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.310 [2024-11-20 07:16:39.352678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.310 [2024-11-20 07:16:39.352775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:57.310 pt4 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:57.310 [2024-11-20 07:16:39.362009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:57.310 [2024-11-20 07:16:39.364329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:57.310 [2024-11-20 07:16:39.364420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:57.310 [2024-11-20 07:16:39.364492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:57.310 [2024-11-20 07:16:39.364733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:57.310 [2024-11-20 07:16:39.364768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:57.310 [2024-11-20 07:16:39.365086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:57.310 [2024-11-20 07:16:39.373574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:57.310 [2024-11-20 07:16:39.373653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:57.310 [2024-11-20 07:16:39.373997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.310 
07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.310 "name": "raid_bdev1", 00:19:57.310 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:57.310 "strip_size_kb": 64, 00:19:57.310 "state": "online", 00:19:57.310 "raid_level": "raid5f", 00:19:57.310 "superblock": true, 00:19:57.310 "num_base_bdevs": 4, 00:19:57.310 "num_base_bdevs_discovered": 4, 00:19:57.310 "num_base_bdevs_operational": 4, 00:19:57.310 "base_bdevs_list": [ 00:19:57.310 { 00:19:57.310 "name": "pt1", 00:19:57.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.310 "is_configured": true, 00:19:57.310 "data_offset": 2048, 00:19:57.310 "data_size": 63488 00:19:57.310 }, 00:19:57.310 { 00:19:57.310 "name": "pt2", 00:19:57.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.310 "is_configured": true, 00:19:57.310 "data_offset": 2048, 00:19:57.310 
"data_size": 63488 00:19:57.310 }, 00:19:57.310 { 00:19:57.310 "name": "pt3", 00:19:57.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.310 "is_configured": true, 00:19:57.310 "data_offset": 2048, 00:19:57.310 "data_size": 63488 00:19:57.310 }, 00:19:57.310 { 00:19:57.310 "name": "pt4", 00:19:57.310 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.310 "is_configured": true, 00:19:57.310 "data_offset": 2048, 00:19:57.310 "data_size": 63488 00:19:57.310 } 00:19:57.310 ] 00:19:57.310 }' 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.310 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.570 [2024-11-20 07:16:39.791746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.570 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.570 "name": "raid_bdev1", 00:19:57.570 "aliases": [ 00:19:57.570 "e90eeec4-1594-4630-b9d2-709598096369" 00:19:57.570 ], 00:19:57.570 "product_name": "Raid Volume", 00:19:57.570 "block_size": 512, 00:19:57.570 "num_blocks": 190464, 00:19:57.570 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:57.570 "assigned_rate_limits": { 00:19:57.570 "rw_ios_per_sec": 0, 00:19:57.570 "rw_mbytes_per_sec": 0, 00:19:57.570 "r_mbytes_per_sec": 0, 00:19:57.571 "w_mbytes_per_sec": 0 00:19:57.571 }, 00:19:57.571 "claimed": false, 00:19:57.571 "zoned": false, 00:19:57.571 "supported_io_types": { 00:19:57.571 "read": true, 00:19:57.571 "write": true, 00:19:57.571 "unmap": false, 00:19:57.571 "flush": false, 00:19:57.571 "reset": true, 00:19:57.571 "nvme_admin": false, 00:19:57.571 "nvme_io": false, 00:19:57.571 "nvme_io_md": false, 00:19:57.571 "write_zeroes": true, 00:19:57.571 "zcopy": false, 00:19:57.571 "get_zone_info": false, 00:19:57.571 "zone_management": false, 00:19:57.571 "zone_append": false, 00:19:57.571 "compare": false, 00:19:57.571 "compare_and_write": false, 00:19:57.571 "abort": false, 00:19:57.571 "seek_hole": false, 00:19:57.571 "seek_data": false, 00:19:57.571 "copy": false, 00:19:57.571 "nvme_iov_md": false 00:19:57.571 }, 00:19:57.571 "driver_specific": { 00:19:57.571 "raid": { 00:19:57.571 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:57.571 "strip_size_kb": 64, 00:19:57.571 "state": "online", 00:19:57.571 "raid_level": "raid5f", 00:19:57.571 "superblock": true, 00:19:57.571 "num_base_bdevs": 4, 00:19:57.571 "num_base_bdevs_discovered": 4, 00:19:57.571 "num_base_bdevs_operational": 4, 00:19:57.571 "base_bdevs_list": [ 00:19:57.571 { 00:19:57.571 "name": "pt1", 00:19:57.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.571 "is_configured": true, 00:19:57.571 "data_offset": 2048, 
00:19:57.571 "data_size": 63488 00:19:57.571 }, 00:19:57.571 { 00:19:57.571 "name": "pt2", 00:19:57.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.571 "is_configured": true, 00:19:57.571 "data_offset": 2048, 00:19:57.571 "data_size": 63488 00:19:57.571 }, 00:19:57.571 { 00:19:57.571 "name": "pt3", 00:19:57.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.571 "is_configured": true, 00:19:57.571 "data_offset": 2048, 00:19:57.571 "data_size": 63488 00:19:57.571 }, 00:19:57.571 { 00:19:57.571 "name": "pt4", 00:19:57.571 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.571 "is_configured": true, 00:19:57.571 "data_offset": 2048, 00:19:57.571 "data_size": 63488 00:19:57.571 } 00:19:57.571 ] 00:19:57.571 } 00:19:57.571 } 00:19:57.571 }' 00:19:57.571 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:57.831 pt2 00:19:57.831 pt3 00:19:57.831 pt4' 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.831 07:16:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.831 07:16:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.831 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 [2024-11-20 07:16:40.127170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e90eeec4-1594-4630-b9d2-709598096369 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e90eeec4-1594-4630-b9d2-709598096369 ']' 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 [2024-11-20 07:16:40.158894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.092 [2024-11-20 07:16:40.158931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.092 [2024-11-20 07:16:40.159032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.092 [2024-11-20 07:16:40.159130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.092 [2024-11-20 07:16:40.159147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.092 
07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.092 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 07:16:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.093 [2024-11-20 07:16:40.322666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:58.093 [2024-11-20 07:16:40.324871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:58.093 [2024-11-20 07:16:40.324932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:58.093 [2024-11-20 07:16:40.324972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:58.093 [2024-11-20 07:16:40.325043] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:58.093 [2024-11-20 07:16:40.325114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:58.093 [2024-11-20 07:16:40.325145] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:58.093 [2024-11-20 07:16:40.325168] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:58.093 [2024-11-20 07:16:40.325185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.093 [2024-11-20 07:16:40.325197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:58.093 request: 00:19:58.093 { 00:19:58.093 "name": "raid_bdev1", 00:19:58.093 "raid_level": "raid5f", 00:19:58.093 "base_bdevs": [ 00:19:58.093 "malloc1", 00:19:58.093 "malloc2", 00:19:58.093 "malloc3", 00:19:58.093 "malloc4" 00:19:58.093 ], 00:19:58.093 "strip_size_kb": 64, 00:19:58.093 "superblock": false, 00:19:58.093 "method": "bdev_raid_create", 00:19:58.093 "req_id": 1 00:19:58.093 } 00:19:58.093 Got JSON-RPC error response 
00:19:58.093 response: 00:19:58.093 { 00:19:58.093 "code": -17, 00:19:58.093 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:58.093 } 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:58.093 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.353 [2024-11-20 07:16:40.386519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:58.353 [2024-11-20 07:16:40.386669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:58.353 [2024-11-20 07:16:40.386721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:58.353 [2024-11-20 07:16:40.386757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.353 [2024-11-20 07:16:40.389314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.353 [2024-11-20 07:16:40.389428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:58.353 [2024-11-20 07:16:40.389558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:58.353 [2024-11-20 07:16:40.389673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:58.353 pt1 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.353 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.353 "name": "raid_bdev1", 00:19:58.353 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:58.353 "strip_size_kb": 64, 00:19:58.353 "state": "configuring", 00:19:58.353 "raid_level": "raid5f", 00:19:58.353 "superblock": true, 00:19:58.353 "num_base_bdevs": 4, 00:19:58.353 "num_base_bdevs_discovered": 1, 00:19:58.353 "num_base_bdevs_operational": 4, 00:19:58.353 "base_bdevs_list": [ 00:19:58.353 { 00:19:58.353 "name": "pt1", 00:19:58.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:58.353 "is_configured": true, 00:19:58.353 "data_offset": 2048, 00:19:58.353 "data_size": 63488 00:19:58.353 }, 00:19:58.353 { 00:19:58.353 "name": null, 00:19:58.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.353 "is_configured": false, 00:19:58.353 "data_offset": 2048, 00:19:58.353 "data_size": 63488 00:19:58.353 }, 00:19:58.353 { 00:19:58.353 "name": null, 00:19:58.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.353 "is_configured": false, 00:19:58.353 "data_offset": 2048, 00:19:58.353 "data_size": 63488 00:19:58.353 }, 00:19:58.353 { 00:19:58.353 "name": null, 00:19:58.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.353 "is_configured": false, 00:19:58.353 "data_offset": 2048, 00:19:58.353 "data_size": 63488 00:19:58.353 } 00:19:58.353 ] 00:19:58.353 }' 
00:19:58.354 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.354 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.613 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:58.613 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.613 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.613 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.901 [2024-11-20 07:16:40.881727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.901 [2024-11-20 07:16:40.881825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.901 [2024-11-20 07:16:40.881850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:58.901 [2024-11-20 07:16:40.881864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.901 [2024-11-20 07:16:40.882435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.901 [2024-11-20 07:16:40.882468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.901 [2024-11-20 07:16:40.882582] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:58.901 [2024-11-20 07:16:40.882614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.901 pt2 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.901 [2024-11-20 07:16:40.893751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:58.901 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.901 "name": "raid_bdev1", 00:19:58.901 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:58.901 "strip_size_kb": 64, 00:19:58.901 "state": "configuring", 00:19:58.901 "raid_level": "raid5f", 00:19:58.901 "superblock": true, 00:19:58.901 "num_base_bdevs": 4, 00:19:58.901 "num_base_bdevs_discovered": 1, 00:19:58.901 "num_base_bdevs_operational": 4, 00:19:58.901 "base_bdevs_list": [ 00:19:58.901 { 00:19:58.901 "name": "pt1", 00:19:58.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:58.902 "is_configured": true, 00:19:58.902 "data_offset": 2048, 00:19:58.902 "data_size": 63488 00:19:58.902 }, 00:19:58.902 { 00:19:58.902 "name": null, 00:19:58.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.902 "is_configured": false, 00:19:58.902 "data_offset": 0, 00:19:58.902 "data_size": 63488 00:19:58.902 }, 00:19:58.902 { 00:19:58.902 "name": null, 00:19:58.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.902 "is_configured": false, 00:19:58.902 "data_offset": 2048, 00:19:58.902 "data_size": 63488 00:19:58.902 }, 00:19:58.902 { 00:19:58.902 "name": null, 00:19:58.902 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.902 "is_configured": false, 00:19:58.902 "data_offset": 2048, 00:19:58.902 "data_size": 63488 00:19:58.902 } 00:19:58.902 ] 00:19:58.902 }' 00:19:58.902 07:16:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.902 07:16:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.161 [2024-11-20 07:16:41.416924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:59.161 [2024-11-20 07:16:41.417088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.161 [2024-11-20 07:16:41.417134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:59.161 [2024-11-20 07:16:41.417174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.161 [2024-11-20 07:16:41.417747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.161 [2024-11-20 07:16:41.417818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:59.161 [2024-11-20 07:16:41.417947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:59.161 [2024-11-20 07:16:41.418007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:59.161 pt2 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.161 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.161 [2024-11-20 07:16:41.424916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:59.161 [2024-11-20 07:16:41.425038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.161 [2024-11-20 07:16:41.425091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:59.161 [2024-11-20 07:16:41.425124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.420 [2024-11-20 07:16:41.425678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.420 [2024-11-20 07:16:41.425752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:59.420 [2024-11-20 07:16:41.425882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:59.420 [2024-11-20 07:16:41.425940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:59.420 pt3 00:19:59.420 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.420 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:59.420 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:59.420 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:59.420 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.420 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.421 [2024-11-20 07:16:41.436926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:59.421 [2024-11-20 07:16:41.437018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.421 [2024-11-20 07:16:41.437046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:59.421 [2024-11-20 07:16:41.437055] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.421 [2024-11-20 07:16:41.437610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.421 [2024-11-20 07:16:41.437643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:59.421 [2024-11-20 07:16:41.437747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:59.421 [2024-11-20 07:16:41.437774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:59.421 [2024-11-20 07:16:41.437954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:59.421 [2024-11-20 07:16:41.437981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:59.421 [2024-11-20 07:16:41.438251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:59.421 pt4 00:19:59.421 [2024-11-20 07:16:41.446999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:59.421 [2024-11-20 07:16:41.447064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:59.421 [2024-11-20 07:16:41.447391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.421 "name": "raid_bdev1", 00:19:59.421 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:59.421 "strip_size_kb": 64, 00:19:59.421 "state": "online", 00:19:59.421 "raid_level": "raid5f", 00:19:59.421 "superblock": true, 00:19:59.421 "num_base_bdevs": 4, 00:19:59.421 "num_base_bdevs_discovered": 4, 00:19:59.421 "num_base_bdevs_operational": 4, 00:19:59.421 "base_bdevs_list": [ 00:19:59.421 { 00:19:59.421 "name": "pt1", 00:19:59.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:59.421 "is_configured": true, 00:19:59.421 
"data_offset": 2048, 00:19:59.421 "data_size": 63488 00:19:59.421 }, 00:19:59.421 { 00:19:59.421 "name": "pt2", 00:19:59.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.421 "is_configured": true, 00:19:59.421 "data_offset": 2048, 00:19:59.421 "data_size": 63488 00:19:59.421 }, 00:19:59.421 { 00:19:59.421 "name": "pt3", 00:19:59.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.421 "is_configured": true, 00:19:59.421 "data_offset": 2048, 00:19:59.421 "data_size": 63488 00:19:59.421 }, 00:19:59.421 { 00:19:59.421 "name": "pt4", 00:19:59.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.421 "is_configured": true, 00:19:59.421 "data_offset": 2048, 00:19:59.421 "data_size": 63488 00:19:59.421 } 00:19:59.421 ] 00:19:59.421 }' 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.421 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:59.680 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.680 07:16:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.680 [2024-11-20 07:16:41.929022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.938 07:16:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.938 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.938 "name": "raid_bdev1", 00:19:59.938 "aliases": [ 00:19:59.938 "e90eeec4-1594-4630-b9d2-709598096369" 00:19:59.938 ], 00:19:59.938 "product_name": "Raid Volume", 00:19:59.938 "block_size": 512, 00:19:59.938 "num_blocks": 190464, 00:19:59.938 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:59.938 "assigned_rate_limits": { 00:19:59.938 "rw_ios_per_sec": 0, 00:19:59.938 "rw_mbytes_per_sec": 0, 00:19:59.938 "r_mbytes_per_sec": 0, 00:19:59.938 "w_mbytes_per_sec": 0 00:19:59.938 }, 00:19:59.938 "claimed": false, 00:19:59.938 "zoned": false, 00:19:59.938 "supported_io_types": { 00:19:59.938 "read": true, 00:19:59.938 "write": true, 00:19:59.938 "unmap": false, 00:19:59.938 "flush": false, 00:19:59.938 "reset": true, 00:19:59.938 "nvme_admin": false, 00:19:59.938 "nvme_io": false, 00:19:59.938 "nvme_io_md": false, 00:19:59.938 "write_zeroes": true, 00:19:59.938 "zcopy": false, 00:19:59.938 "get_zone_info": false, 00:19:59.938 "zone_management": false, 00:19:59.938 "zone_append": false, 00:19:59.938 "compare": false, 00:19:59.938 "compare_and_write": false, 00:19:59.938 "abort": false, 00:19:59.938 "seek_hole": false, 00:19:59.938 "seek_data": false, 00:19:59.938 "copy": false, 00:19:59.938 "nvme_iov_md": false 00:19:59.938 }, 00:19:59.938 "driver_specific": { 00:19:59.938 "raid": { 00:19:59.938 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:19:59.938 "strip_size_kb": 64, 00:19:59.938 "state": "online", 00:19:59.938 "raid_level": "raid5f", 00:19:59.938 "superblock": true, 00:19:59.938 "num_base_bdevs": 4, 00:19:59.938 "num_base_bdevs_discovered": 4, 
00:19:59.938 "num_base_bdevs_operational": 4, 00:19:59.938 "base_bdevs_list": [ 00:19:59.938 { 00:19:59.938 "name": "pt1", 00:19:59.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:59.938 "is_configured": true, 00:19:59.938 "data_offset": 2048, 00:19:59.938 "data_size": 63488 00:19:59.938 }, 00:19:59.938 { 00:19:59.938 "name": "pt2", 00:19:59.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.938 "is_configured": true, 00:19:59.938 "data_offset": 2048, 00:19:59.938 "data_size": 63488 00:19:59.938 }, 00:19:59.938 { 00:19:59.938 "name": "pt3", 00:19:59.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.938 "is_configured": true, 00:19:59.938 "data_offset": 2048, 00:19:59.938 "data_size": 63488 00:19:59.938 }, 00:19:59.938 { 00:19:59.938 "name": "pt4", 00:19:59.938 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.938 "is_configured": true, 00:19:59.938 "data_offset": 2048, 00:19:59.938 "data_size": 63488 00:19:59.938 } 00:19:59.938 ] 00:19:59.938 } 00:19:59.938 } 00:19:59.938 }' 00:19:59.938 07:16:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:59.938 pt2 00:19:59.938 pt3 00:19:59.938 pt4' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.938 07:16:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.938 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.938 
07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:00.198 [2024-11-20 07:16:42.288459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e90eeec4-1594-4630-b9d2-709598096369 '!=' e90eeec4-1594-4630-b9d2-709598096369 ']' 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.198 [2024-11-20 07:16:42.340198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.198 "name": "raid_bdev1", 00:20:00.198 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:20:00.198 "strip_size_kb": 64, 00:20:00.198 "state": "online", 00:20:00.198 "raid_level": "raid5f", 00:20:00.198 "superblock": true, 00:20:00.198 "num_base_bdevs": 4, 00:20:00.198 "num_base_bdevs_discovered": 3, 00:20:00.198 "num_base_bdevs_operational": 3, 00:20:00.198 "base_bdevs_list": [ 00:20:00.198 { 00:20:00.198 "name": null, 00:20:00.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.198 "is_configured": false, 00:20:00.198 "data_offset": 0, 00:20:00.198 "data_size": 63488 00:20:00.198 }, 00:20:00.198 { 00:20:00.198 "name": "pt2", 00:20:00.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.198 "is_configured": true, 00:20:00.198 "data_offset": 2048, 00:20:00.198 "data_size": 63488 00:20:00.198 }, 00:20:00.198 { 00:20:00.198 "name": "pt3", 00:20:00.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:00.198 "is_configured": true, 00:20:00.198 "data_offset": 2048, 00:20:00.198 "data_size": 63488 00:20:00.198 }, 00:20:00.198 { 00:20:00.198 "name": "pt4", 00:20:00.198 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:00.198 "is_configured": true, 00:20:00.198 
"data_offset": 2048, 00:20:00.198 "data_size": 63488 00:20:00.198 } 00:20:00.198 ] 00:20:00.198 }' 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.198 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.765 [2024-11-20 07:16:42.827316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.765 [2024-11-20 07:16:42.827430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.765 [2024-11-20 07:16:42.827559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.765 [2024-11-20 07:16:42.827699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.765 [2024-11-20 07:16:42.827755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.765 [2024-11-20 07:16:42.919178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.765 [2024-11-20 07:16:42.919319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.765 [2024-11-20 07:16:42.919392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:00.765 [2024-11-20 07:16:42.919432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.765 [2024-11-20 07:16:42.922026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.765 [2024-11-20 07:16:42.922135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:00.765 [2024-11-20 07:16:42.922287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:00.765 [2024-11-20 07:16:42.922399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.765 pt2 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.765 07:16:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.766 "name": "raid_bdev1", 00:20:00.766 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:20:00.766 "strip_size_kb": 64, 00:20:00.766 "state": "configuring", 00:20:00.766 "raid_level": "raid5f", 00:20:00.766 "superblock": true, 00:20:00.766 
"num_base_bdevs": 4, 00:20:00.766 "num_base_bdevs_discovered": 1, 00:20:00.766 "num_base_bdevs_operational": 3, 00:20:00.766 "base_bdevs_list": [ 00:20:00.766 { 00:20:00.766 "name": null, 00:20:00.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.766 "is_configured": false, 00:20:00.766 "data_offset": 2048, 00:20:00.766 "data_size": 63488 00:20:00.766 }, 00:20:00.766 { 00:20:00.766 "name": "pt2", 00:20:00.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.766 "is_configured": true, 00:20:00.766 "data_offset": 2048, 00:20:00.766 "data_size": 63488 00:20:00.766 }, 00:20:00.766 { 00:20:00.766 "name": null, 00:20:00.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:00.766 "is_configured": false, 00:20:00.766 "data_offset": 2048, 00:20:00.766 "data_size": 63488 00:20:00.766 }, 00:20:00.766 { 00:20:00.766 "name": null, 00:20:00.766 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:00.766 "is_configured": false, 00:20:00.766 "data_offset": 2048, 00:20:00.766 "data_size": 63488 00:20:00.766 } 00:20:00.766 ] 00:20:00.766 }' 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.766 07:16:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 [2024-11-20 07:16:43.398403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:01.343 [2024-11-20 
07:16:43.398492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.343 [2024-11-20 07:16:43.398528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:01.343 [2024-11-20 07:16:43.398544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.343 [2024-11-20 07:16:43.399131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.343 [2024-11-20 07:16:43.399172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:01.343 [2024-11-20 07:16:43.399281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:01.343 [2024-11-20 07:16:43.399315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:01.343 pt3 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.343 "name": "raid_bdev1", 00:20:01.343 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:20:01.343 "strip_size_kb": 64, 00:20:01.343 "state": "configuring", 00:20:01.343 "raid_level": "raid5f", 00:20:01.343 "superblock": true, 00:20:01.343 "num_base_bdevs": 4, 00:20:01.343 "num_base_bdevs_discovered": 2, 00:20:01.343 "num_base_bdevs_operational": 3, 00:20:01.343 "base_bdevs_list": [ 00:20:01.343 { 00:20:01.343 "name": null, 00:20:01.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.343 "is_configured": false, 00:20:01.343 "data_offset": 2048, 00:20:01.343 "data_size": 63488 00:20:01.343 }, 00:20:01.343 { 00:20:01.343 "name": "pt2", 00:20:01.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.343 "is_configured": true, 00:20:01.343 "data_offset": 2048, 00:20:01.343 "data_size": 63488 00:20:01.343 }, 00:20:01.343 { 00:20:01.343 "name": "pt3", 00:20:01.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.343 "is_configured": true, 00:20:01.343 "data_offset": 2048, 00:20:01.343 "data_size": 63488 00:20:01.343 }, 00:20:01.343 { 00:20:01.343 "name": null, 00:20:01.343 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:01.343 "is_configured": false, 00:20:01.343 "data_offset": 2048, 
00:20:01.343 "data_size": 63488 00:20:01.343 } 00:20:01.343 ] 00:20:01.343 }' 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.343 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.603 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:01.603 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:01.603 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:20:01.603 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:01.603 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.603 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.603 [2024-11-20 07:16:43.865629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:01.603 [2024-11-20 07:16:43.865763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.603 [2024-11-20 07:16:43.865795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:01.603 [2024-11-20 07:16:43.865806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.603 [2024-11-20 07:16:43.866366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.603 [2024-11-20 07:16:43.866390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:01.603 [2024-11-20 07:16:43.866496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:01.603 [2024-11-20 07:16:43.866530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:01.603 [2024-11-20 07:16:43.866701] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:01.603 [2024-11-20 07:16:43.866711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:01.862 [2024-11-20 07:16:43.867000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:01.862 [2024-11-20 07:16:43.875631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:01.862 [2024-11-20 07:16:43.875688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:01.862 [2024-11-20 07:16:43.876122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.862 pt4 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.862 
07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.862 "name": "raid_bdev1", 00:20:01.862 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:20:01.862 "strip_size_kb": 64, 00:20:01.862 "state": "online", 00:20:01.862 "raid_level": "raid5f", 00:20:01.862 "superblock": true, 00:20:01.862 "num_base_bdevs": 4, 00:20:01.862 "num_base_bdevs_discovered": 3, 00:20:01.862 "num_base_bdevs_operational": 3, 00:20:01.862 "base_bdevs_list": [ 00:20:01.862 { 00:20:01.862 "name": null, 00:20:01.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.862 "is_configured": false, 00:20:01.862 "data_offset": 2048, 00:20:01.862 "data_size": 63488 00:20:01.862 }, 00:20:01.862 { 00:20:01.862 "name": "pt2", 00:20:01.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.862 "is_configured": true, 00:20:01.862 "data_offset": 2048, 00:20:01.862 "data_size": 63488 00:20:01.862 }, 00:20:01.862 { 00:20:01.862 "name": "pt3", 00:20:01.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.862 "is_configured": true, 00:20:01.862 "data_offset": 2048, 00:20:01.862 "data_size": 63488 00:20:01.862 }, 00:20:01.862 { 00:20:01.862 "name": "pt4", 00:20:01.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:01.862 "is_configured": true, 00:20:01.862 "data_offset": 2048, 00:20:01.862 "data_size": 63488 00:20:01.862 } 00:20:01.862 ] 00:20:01.862 }' 00:20:01.862 07:16:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.862 07:16:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.121 [2024-11-20 07:16:44.355077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.121 [2024-11-20 07:16:44.355118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.121 [2024-11-20 07:16:44.355225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.121 [2024-11-20 07:16:44.355323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.121 [2024-11-20 07:16:44.355354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:02.121 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.381 [2024-11-20 07:16:44.426990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:02.381 [2024-11-20 07:16:44.427201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.381 [2024-11-20 07:16:44.427248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:02.381 [2024-11-20 07:16:44.427264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.381 [2024-11-20 07:16:44.430245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.381 [2024-11-20 07:16:44.430407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:02.381 [2024-11-20 07:16:44.430556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:02.381 [2024-11-20 07:16:44.430639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:02.381 
[2024-11-20 07:16:44.430857] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:02.381 [2024-11-20 07:16:44.430875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.381 [2024-11-20 07:16:44.430895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:02.381 [2024-11-20 07:16:44.430990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.381 [2024-11-20 07:16:44.431219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:02.381 pt1 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.381 "name": "raid_bdev1", 00:20:02.381 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:20:02.381 "strip_size_kb": 64, 00:20:02.381 "state": "configuring", 00:20:02.381 "raid_level": "raid5f", 00:20:02.381 "superblock": true, 00:20:02.381 "num_base_bdevs": 4, 00:20:02.381 "num_base_bdevs_discovered": 2, 00:20:02.381 "num_base_bdevs_operational": 3, 00:20:02.381 "base_bdevs_list": [ 00:20:02.381 { 00:20:02.381 "name": null, 00:20:02.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.381 "is_configured": false, 00:20:02.381 "data_offset": 2048, 00:20:02.381 "data_size": 63488 00:20:02.381 }, 00:20:02.381 { 00:20:02.381 "name": "pt2", 00:20:02.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.381 "is_configured": true, 00:20:02.381 "data_offset": 2048, 00:20:02.381 "data_size": 63488 00:20:02.381 }, 00:20:02.381 { 00:20:02.381 "name": "pt3", 00:20:02.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.381 "is_configured": true, 00:20:02.381 "data_offset": 2048, 00:20:02.381 "data_size": 63488 00:20:02.381 }, 00:20:02.381 { 00:20:02.381 "name": null, 00:20:02.381 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:02.381 "is_configured": false, 00:20:02.381 "data_offset": 2048, 00:20:02.381 "data_size": 63488 00:20:02.381 } 00:20:02.381 ] 
00:20:02.381 }' 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.381 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.641 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:02.641 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:02.641 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.641 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.641 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.900 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:02.900 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:02.900 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.900 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.900 [2024-11-20 07:16:44.914261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:02.900 [2024-11-20 07:16:44.914469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.900 [2024-11-20 07:16:44.914565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:02.901 [2024-11-20 07:16:44.914627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.901 [2024-11-20 07:16:44.915287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.901 [2024-11-20 07:16:44.915400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:20:02.901 [2024-11-20 07:16:44.915566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:02.901 [2024-11-20 07:16:44.915665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:02.901 [2024-11-20 07:16:44.915934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:02.901 [2024-11-20 07:16:44.915993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:02.901 [2024-11-20 07:16:44.916412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:02.901 [2024-11-20 07:16:44.926470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:02.901 [2024-11-20 07:16:44.926593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:02.901 [2024-11-20 07:16:44.927057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.901 pt4 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.901 07:16:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.901 "name": "raid_bdev1", 00:20:02.901 "uuid": "e90eeec4-1594-4630-b9d2-709598096369", 00:20:02.901 "strip_size_kb": 64, 00:20:02.901 "state": "online", 00:20:02.901 "raid_level": "raid5f", 00:20:02.901 "superblock": true, 00:20:02.901 "num_base_bdevs": 4, 00:20:02.901 "num_base_bdevs_discovered": 3, 00:20:02.901 "num_base_bdevs_operational": 3, 00:20:02.901 "base_bdevs_list": [ 00:20:02.901 { 00:20:02.901 "name": null, 00:20:02.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.901 "is_configured": false, 00:20:02.901 "data_offset": 2048, 00:20:02.901 "data_size": 63488 00:20:02.901 }, 00:20:02.901 { 00:20:02.901 "name": "pt2", 00:20:02.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.901 "is_configured": true, 00:20:02.901 "data_offset": 2048, 00:20:02.901 "data_size": 63488 00:20:02.901 }, 00:20:02.901 { 00:20:02.901 "name": "pt3", 00:20:02.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.901 "is_configured": true, 00:20:02.901 "data_offset": 2048, 00:20:02.901 "data_size": 63488 
00:20:02.901 }, 00:20:02.901 { 00:20:02.901 "name": "pt4", 00:20:02.901 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:02.901 "is_configured": true, 00:20:02.901 "data_offset": 2048, 00:20:02.901 "data_size": 63488 00:20:02.901 } 00:20:02.901 ] 00:20:02.901 }' 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.901 07:16:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.159 07:16:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:03.159 07:16:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:03.159 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.159 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.418 [2024-11-20 07:16:45.461680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e90eeec4-1594-4630-b9d2-709598096369 '!=' e90eeec4-1594-4630-b9d2-709598096369 ']' 00:20:03.418 07:16:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84618 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84618 ']' 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84618 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84618 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.418 killing process with pid 84618 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84618' 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84618 00:20:03.418 [2024-11-20 07:16:45.530061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.418 [2024-11-20 07:16:45.530202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.418 07:16:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84618 00:20:03.418 [2024-11-20 07:16:45.530300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.418 [2024-11-20 07:16:45.530315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:03.992 [2024-11-20 07:16:45.977863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:05.403 07:16:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:05.403 
00:20:05.403 real 0m9.202s 00:20:05.403 user 0m14.374s 00:20:05.403 sys 0m1.584s 00:20:05.403 07:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.403 07:16:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.403 ************************************ 00:20:05.403 END TEST raid5f_superblock_test 00:20:05.403 ************************************ 00:20:05.403 07:16:47 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:05.403 07:16:47 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:20:05.403 07:16:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:05.403 07:16:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.403 07:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.403 ************************************ 00:20:05.403 START TEST raid5f_rebuild_test 00:20:05.403 ************************************ 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:05.403 07:16:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85109 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85109 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85109 ']' 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.403 07:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.403 [2024-11-20 07:16:47.489619] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:05.403 [2024-11-20 07:16:47.489899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85109 ] 00:20:05.403 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:05.403 Zero copy mechanism will not be used. 00:20:05.403 [2024-11-20 07:16:47.664429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.663 [2024-11-20 07:16:47.808814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.923 [2024-11-20 07:16:48.051395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.923 [2024-11-20 07:16:48.051571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.183 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.183 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:06.183 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.183 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:06.183 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.183 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 BaseBdev1_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:20:06.442 [2024-11-20 07:16:48.468947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:06.442 [2024-11-20 07:16:48.469044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.442 [2024-11-20 07:16:48.469072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:06.442 [2024-11-20 07:16:48.469086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.442 [2024-11-20 07:16:48.471726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.442 [2024-11-20 07:16:48.471839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:06.442 BaseBdev1 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 BaseBdev2_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 [2024-11-20 07:16:48.530703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:06.442 [2024-11-20 07:16:48.530796] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.442 [2024-11-20 07:16:48.530821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:06.442 [2024-11-20 07:16:48.530836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.442 [2024-11-20 07:16:48.533532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.442 [2024-11-20 07:16:48.533668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:06.442 BaseBdev2 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 BaseBdev3_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 [2024-11-20 07:16:48.601315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:06.442 [2024-11-20 07:16:48.601411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.442 [2024-11-20 07:16:48.601442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:06.442 
[2024-11-20 07:16:48.601456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.442 [2024-11-20 07:16:48.604020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.442 [2024-11-20 07:16:48.604073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:06.442 BaseBdev3 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 BaseBdev4_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 [2024-11-20 07:16:48.663230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:06.442 [2024-11-20 07:16:48.663318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.442 [2024-11-20 07:16:48.663365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:06.442 [2024-11-20 07:16:48.663380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.442 [2024-11-20 07:16:48.665884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:06.442 [2024-11-20 07:16:48.666036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:06.442 BaseBdev4 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.442 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.702 spare_malloc 00:20:06.702 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.702 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:06.702 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.702 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.703 spare_delay 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.703 [2024-11-20 07:16:48.738756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:06.703 [2024-11-20 07:16:48.738861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.703 [2024-11-20 07:16:48.738892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:06.703 [2024-11-20 07:16:48.738905] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.703 [2024-11-20 07:16:48.741542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.703 [2024-11-20 07:16:48.741603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:06.703 spare 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.703 [2024-11-20 07:16:48.746763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.703 [2024-11-20 07:16:48.748970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.703 [2024-11-20 07:16:48.749154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:06.703 [2024-11-20 07:16:48.749237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:06.703 [2024-11-20 07:16:48.749393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:06.703 [2024-11-20 07:16:48.749412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:06.703 [2024-11-20 07:16:48.749787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:06.703 [2024-11-20 07:16:48.758484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:06.703 [2024-11-20 07:16:48.758523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:06.703 [2024-11-20 
07:16:48.758861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.703 "name": "raid_bdev1", 00:20:06.703 "uuid": 
"46e902d3-de70-4362-bd28-ad29061c961e", 00:20:06.703 "strip_size_kb": 64, 00:20:06.703 "state": "online", 00:20:06.703 "raid_level": "raid5f", 00:20:06.703 "superblock": false, 00:20:06.703 "num_base_bdevs": 4, 00:20:06.703 "num_base_bdevs_discovered": 4, 00:20:06.703 "num_base_bdevs_operational": 4, 00:20:06.703 "base_bdevs_list": [ 00:20:06.703 { 00:20:06.703 "name": "BaseBdev1", 00:20:06.703 "uuid": "104be122-e24d-57bd-894a-00f427190ca5", 00:20:06.703 "is_configured": true, 00:20:06.703 "data_offset": 0, 00:20:06.703 "data_size": 65536 00:20:06.703 }, 00:20:06.703 { 00:20:06.703 "name": "BaseBdev2", 00:20:06.703 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d", 00:20:06.703 "is_configured": true, 00:20:06.703 "data_offset": 0, 00:20:06.703 "data_size": 65536 00:20:06.703 }, 00:20:06.703 { 00:20:06.703 "name": "BaseBdev3", 00:20:06.703 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:06.703 "is_configured": true, 00:20:06.703 "data_offset": 0, 00:20:06.703 "data_size": 65536 00:20:06.703 }, 00:20:06.703 { 00:20:06.703 "name": "BaseBdev4", 00:20:06.703 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:06.703 "is_configured": true, 00:20:06.703 "data_offset": 0, 00:20:06.703 "data_size": 65536 00:20:06.703 } 00:20:06.703 ] 00:20:06.703 }' 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.703 07:16:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.962 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:06.962 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:06.963 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.963 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.963 [2024-11-20 07:16:49.216180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:07.221 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:07.480 [2024-11-20 07:16:49.587408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:07.480 /dev/nbd0 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:07.480 1+0 records in 00:20:07.480 1+0 records out 00:20:07.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395759 s, 10.3 MB/s 00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:07.480 07:16:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192
00:20:07.480 07:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
00:20:08.417 512+0 records in
00:20:08.417 512+0 records out
00:20:08.417 100663296 bytes (101 MB, 96 MiB) copied, 0.755232 s, 133 MB/s
00:20:08.417 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:20:08.417 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:20:08.417 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:08.417 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:08.417 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:20:08.417 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:08.417 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:20:08.675 [2024-11-20 07:16:50.685078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:08.675 [2024-11-20 07:16:50.708538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:08.675 "name": "raid_bdev1",
00:20:08.675 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:08.675 "strip_size_kb": 64,
00:20:08.675 "state": "online",
00:20:08.675 "raid_level": "raid5f",
00:20:08.675 "superblock": false,
00:20:08.675 "num_base_bdevs": 4,
00:20:08.675 "num_base_bdevs_discovered": 3,
00:20:08.675 "num_base_bdevs_operational": 3,
00:20:08.675 "base_bdevs_list": [
00:20:08.675 {
00:20:08.675 "name": null,
00:20:08.675 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:08.675 "is_configured": false,
00:20:08.675 "data_offset": 0,
00:20:08.675 "data_size": 65536
00:20:08.675 },
00:20:08.675 {
00:20:08.675 "name": "BaseBdev2",
00:20:08.675 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:08.675 "is_configured": true,
00:20:08.675 "data_offset": 0,
00:20:08.675 "data_size": 65536
00:20:08.675 },
00:20:08.675 {
00:20:08.675 "name": "BaseBdev3",
00:20:08.675 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:08.675 "is_configured": true,
00:20:08.675 "data_offset": 0,
00:20:08.675 "data_size": 65536
00:20:08.675 },
00:20:08.675 {
00:20:08.675 "name": "BaseBdev4",
00:20:08.675 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:08.675 "is_configured": true,
00:20:08.675 "data_offset": 0,
00:20:08.675 "data_size": 65536
00:20:08.675 }
00:20:08.675 ]
00:20:08.675 }'
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:08.675 07:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:08.936 07:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:08.936 07:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.936 07:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:08.936 [2024-11-20 07:16:51.151836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:08.936 [2024-11-20 07:16:51.172022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750
00:20:08.936 07:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.936 07:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:20:08.936 [2024-11-20 07:16:51.184970] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.315 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:10.315 "name": "raid_bdev1",
00:20:10.315 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:10.315 "strip_size_kb": 64,
00:20:10.315 "state": "online",
00:20:10.315 "raid_level": "raid5f",
00:20:10.315 "superblock": false,
00:20:10.315 "num_base_bdevs": 4,
00:20:10.315 "num_base_bdevs_discovered": 4,
00:20:10.315 "num_base_bdevs_operational": 4,
00:20:10.315 "process": {
00:20:10.315 "type": "rebuild",
00:20:10.315 "target": "spare",
00:20:10.315 "progress": {
00:20:10.315 "blocks": 17280,
00:20:10.315 "percent": 8
00:20:10.315 }
00:20:10.315 },
00:20:10.316 "base_bdevs_list": [
00:20:10.316 {
00:20:10.316 "name": "spare",
00:20:10.316 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:10.316 "is_configured": true,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 },
00:20:10.316 {
00:20:10.316 "name": "BaseBdev2",
00:20:10.316 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:10.316 "is_configured": true,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 },
00:20:10.316 {
00:20:10.316 "name": "BaseBdev3",
00:20:10.316 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:10.316 "is_configured": true,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 },
00:20:10.316 {
00:20:10.316 "name": "BaseBdev4",
00:20:10.316 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:10.316 "is_configured": true,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 }
00:20:10.316 ]
00:20:10.316 }'
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:10.316 [2024-11-20 07:16:52.324917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:10.316 [2024-11-20 07:16:52.395876] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:10.316 [2024-11-20 07:16:52.395984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:10.316 [2024-11-20 07:16:52.396009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:10.316 [2024-11-20 07:16:52.396022] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:10.316 "name": "raid_bdev1",
00:20:10.316 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:10.316 "strip_size_kb": 64,
00:20:10.316 "state": "online",
00:20:10.316 "raid_level": "raid5f",
00:20:10.316 "superblock": false,
00:20:10.316 "num_base_bdevs": 4,
00:20:10.316 "num_base_bdevs_discovered": 3,
00:20:10.316 "num_base_bdevs_operational": 3,
00:20:10.316 "base_bdevs_list": [
00:20:10.316 {
00:20:10.316 "name": null,
00:20:10.316 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:10.316 "is_configured": false,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 },
00:20:10.316 {
00:20:10.316 "name": "BaseBdev2",
00:20:10.316 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:10.316 "is_configured": true,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 },
00:20:10.316 {
00:20:10.316 "name": "BaseBdev3",
00:20:10.316 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:10.316 "is_configured": true,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 },
00:20:10.316 {
00:20:10.316 "name": "BaseBdev4",
00:20:10.316 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:10.316 "is_configured": true,
00:20:10.316 "data_offset": 0,
00:20:10.316 "data_size": 65536
00:20:10.316 }
00:20:10.316 ]
00:20:10.316 }'
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:10.316 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:10.885 "name": "raid_bdev1",
00:20:10.885 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:10.885 "strip_size_kb": 64,
00:20:10.885 "state": "online",
00:20:10.885 "raid_level": "raid5f",
00:20:10.885 "superblock": false,
00:20:10.885 "num_base_bdevs": 4,
00:20:10.885 "num_base_bdevs_discovered": 3,
00:20:10.885 "num_base_bdevs_operational": 3,
00:20:10.885 "base_bdevs_list": [
00:20:10.885 {
00:20:10.885 "name": null,
00:20:10.885 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:10.885 "is_configured": false,
00:20:10.885 "data_offset": 0,
00:20:10.885 "data_size": 65536
00:20:10.885 },
00:20:10.885 {
00:20:10.885 "name": "BaseBdev2",
00:20:10.885 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:10.885 "is_configured": true,
00:20:10.885 "data_offset": 0,
00:20:10.885 "data_size": 65536
00:20:10.885 },
00:20:10.885 {
00:20:10.885 "name": "BaseBdev3",
00:20:10.885 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:10.885 "is_configured": true,
00:20:10.885 "data_offset": 0,
00:20:10.885 "data_size": 65536
00:20:10.885 },
00:20:10.885 {
00:20:10.885 "name": "BaseBdev4",
00:20:10.885 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:10.885 "is_configured": true,
00:20:10.885 "data_offset": 0,
00:20:10.885 "data_size": 65536
00:20:10.885 }
00:20:10.885 ]
00:20:10.885 }'
00:20:10.885 07:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:10.885 07:16:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:10.885 07:16:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:10.885 07:16:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:10.886 07:16:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:10.886 07:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.886 07:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:10.886 [2024-11-20 07:16:53.099485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:10.886 [2024-11-20 07:16:53.119213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820
00:20:10.886 07:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.886 07:16:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:20:10.886 [2024-11-20 07:16:53.131102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:12.265 "name": "raid_bdev1",
00:20:12.265 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:12.265 "strip_size_kb": 64,
00:20:12.265 "state": "online",
00:20:12.265 "raid_level": "raid5f",
00:20:12.265 "superblock": false,
00:20:12.265 "num_base_bdevs": 4,
00:20:12.265 "num_base_bdevs_discovered": 4,
00:20:12.265 "num_base_bdevs_operational": 4,
00:20:12.265 "process": {
00:20:12.265 "type": "rebuild",
00:20:12.265 "target": "spare",
00:20:12.265 "progress": {
00:20:12.265 "blocks": 17280,
00:20:12.265 "percent": 8
00:20:12.265 }
00:20:12.265 },
00:20:12.265 "base_bdevs_list": [
00:20:12.265 {
00:20:12.265 "name": "spare",
00:20:12.265 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 },
00:20:12.265 {
00:20:12.265 "name": "BaseBdev2",
00:20:12.265 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 },
00:20:12.265 {
00:20:12.265 "name": "BaseBdev3",
00:20:12.265 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 },
00:20:12.265 {
00:20:12.265 "name": "BaseBdev4",
00:20:12.265 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 }
00:20:12.265 ]
00:20:12.265 }'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=650
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:12.265 "name": "raid_bdev1",
00:20:12.265 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:12.265 "strip_size_kb": 64,
00:20:12.265 "state": "online",
00:20:12.265 "raid_level": "raid5f",
00:20:12.265 "superblock": false,
00:20:12.265 "num_base_bdevs": 4,
00:20:12.265 "num_base_bdevs_discovered": 4,
00:20:12.265 "num_base_bdevs_operational": 4,
00:20:12.265 "process": {
00:20:12.265 "type": "rebuild",
00:20:12.265 "target": "spare",
00:20:12.265 "progress": {
00:20:12.265 "blocks": 21120,
00:20:12.265 "percent": 10
00:20:12.265 }
00:20:12.265 },
00:20:12.265 "base_bdevs_list": [
00:20:12.265 {
00:20:12.265 "name": "spare",
00:20:12.265 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 },
00:20:12.265 {
00:20:12.265 "name": "BaseBdev2",
00:20:12.265 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 },
00:20:12.265 {
00:20:12.265 "name": "BaseBdev3",
00:20:12.265 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 },
00:20:12.265 {
00:20:12.265 "name": "BaseBdev4",
00:20:12.265 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:12.265 "is_configured": true,
00:20:12.265 "data_offset": 0,
00:20:12.265 "data_size": 65536
00:20:12.265 }
00:20:12.265 ]
00:20:12.265 }'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:12.265 07:16:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:13.203 07:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.462 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:13.462 "name": "raid_bdev1",
00:20:13.462 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:13.462 "strip_size_kb": 64,
00:20:13.462 "state": "online",
00:20:13.462 "raid_level": "raid5f",
00:20:13.462 "superblock": false,
00:20:13.462 "num_base_bdevs": 4,
00:20:13.462 "num_base_bdevs_discovered": 4,
00:20:13.462 "num_base_bdevs_operational": 4,
00:20:13.462 "process": {
00:20:13.462 "type": "rebuild",
00:20:13.462 "target": "spare",
00:20:13.463 "progress": {
00:20:13.463 "blocks": 42240,
00:20:13.463 "percent": 21
00:20:13.463 }
00:20:13.463 },
00:20:13.463 "base_bdevs_list": [
00:20:13.463 {
00:20:13.463 "name": "spare",
00:20:13.463 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:13.463 "is_configured": true,
00:20:13.463 "data_offset": 0,
00:20:13.463 "data_size": 65536
00:20:13.463 },
00:20:13.463 {
00:20:13.463 "name": "BaseBdev2",
00:20:13.463 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:13.463 "is_configured": true,
00:20:13.463 "data_offset": 0,
00:20:13.463 "data_size": 65536
00:20:13.463 },
00:20:13.463 {
00:20:13.463 "name": "BaseBdev3",
00:20:13.463 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:13.463 "is_configured": true,
00:20:13.463 "data_offset": 0,
00:20:13.463 "data_size": 65536
00:20:13.463 },
00:20:13.463 {
00:20:13.463 "name": "BaseBdev4",
00:20:13.463 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:13.463 "is_configured": true,
00:20:13.463 "data_offset": 0,
00:20:13.463 "data_size": 65536
00:20:13.463 }
00:20:13.463 ]
00:20:13.463 }'
00:20:13.463 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:13.463 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:13.463 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:13.463 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:13.463 07:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:14.445 "name": "raid_bdev1",
00:20:14.445 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:14.445 "strip_size_kb": 64,
00:20:14.445 "state": "online",
00:20:14.445 "raid_level": "raid5f",
00:20:14.445 "superblock": false,
00:20:14.445 "num_base_bdevs": 4,
00:20:14.445 "num_base_bdevs_discovered": 4,
00:20:14.445 "num_base_bdevs_operational": 4,
00:20:14.445 "process": {
00:20:14.445 "type": "rebuild",
00:20:14.445 "target": "spare",
00:20:14.445 "progress": {
00:20:14.445 "blocks": 65280,
00:20:14.445 "percent": 33
00:20:14.445 }
00:20:14.445 },
00:20:14.445 "base_bdevs_list": [
00:20:14.445 {
00:20:14.445 "name": "spare",
00:20:14.445 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:14.445 "is_configured": true,
00:20:14.445 "data_offset": 0,
00:20:14.445 "data_size": 65536
00:20:14.445 },
00:20:14.445 {
00:20:14.445 "name": "BaseBdev2",
00:20:14.445 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:14.445 "is_configured": true,
00:20:14.445 "data_offset": 0,
00:20:14.445 "data_size": 65536
00:20:14.445 },
00:20:14.445 {
00:20:14.445 "name": "BaseBdev3",
00:20:14.445 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:14.445 "is_configured": true,
00:20:14.445 "data_offset": 0,
00:20:14.445 "data_size": 65536
00:20:14.445 },
00:20:14.445 {
00:20:14.445 "name": "BaseBdev4",
00:20:14.445 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:14.445 "is_configured": true,
00:20:14.445 "data_offset": 0,
00:20:14.445 "data_size": 65536
00:20:14.445 }
00:20:14.445 ]
00:20:14.445 }'
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:14.445 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:14.704 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:14.704 07:16:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.638 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:15.638 "name": "raid_bdev1",
00:20:15.638 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:15.638 "strip_size_kb": 64,
00:20:15.638 "state": "online",
00:20:15.638 "raid_level": "raid5f",
00:20:15.638 "superblock": false,
00:20:15.638 "num_base_bdevs": 4,
00:20:15.638 "num_base_bdevs_discovered": 4,
00:20:15.638 "num_base_bdevs_operational": 4,
00:20:15.638 "process": {
00:20:15.638 "type": "rebuild",
00:20:15.638 "target": "spare",
00:20:15.638 "progress": {
00:20:15.638 "blocks": 86400,
00:20:15.638 "percent": 43
00:20:15.638 }
00:20:15.638 },
00:20:15.638 "base_bdevs_list": [
00:20:15.638 {
00:20:15.638 "name": "spare",
00:20:15.638 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:15.638 "is_configured": true,
00:20:15.638 "data_offset": 0,
00:20:15.638 "data_size": 65536
00:20:15.638 },
00:20:15.638 {
00:20:15.638 "name": "BaseBdev2",
00:20:15.639 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:15.639 "is_configured": true,
00:20:15.639 "data_offset": 0,
00:20:15.639 "data_size": 65536
00:20:15.639 },
00:20:15.639 {
00:20:15.639 "name": "BaseBdev3",
00:20:15.639 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:15.639 "is_configured": true,
00:20:15.639 "data_offset": 0,
00:20:15.639 "data_size": 65536
00:20:15.639 },
00:20:15.639 {
00:20:15.639 "name": "BaseBdev4",
00:20:15.639 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:15.639 "is_configured": true,
00:20:15.639 "data_offset": 0,
00:20:15.639 "data_size": 65536
00:20:15.639 }
00:20:15.639 ]
00:20:15.639 }'
00:20:15.639 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:15.639 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:15.639 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:15.899 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:15.899 07:16:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:16.856 "name": "raid_bdev1",
00:20:16.856 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:16.856 "strip_size_kb": 64,
00:20:16.856 "state": "online",
00:20:16.856 "raid_level": "raid5f",
00:20:16.856 "superblock": false,
00:20:16.856 "num_base_bdevs": 4,
00:20:16.856 "num_base_bdevs_discovered": 4,
00:20:16.856 "num_base_bdevs_operational": 4,
00:20:16.856 "process": {
00:20:16.856 "type": "rebuild",
00:20:16.856 "target": "spare",
00:20:16.856 "progress": {
00:20:16.856 "blocks": 109440,
00:20:16.856 "percent": 55
00:20:16.856 }
00:20:16.856 },
00:20:16.856 "base_bdevs_list": [
00:20:16.856 {
00:20:16.856 "name": "spare",
00:20:16.856 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:16.856 "is_configured": true,
00:20:16.856 "data_offset": 0,
00:20:16.856 "data_size": 65536
00:20:16.856 },
00:20:16.856 {
00:20:16.856 "name": "BaseBdev2",
00:20:16.856 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:16.856 "is_configured": true,
00:20:16.856 "data_offset": 0,
00:20:16.856 "data_size": 65536
00:20:16.856 },
00:20:16.856 {
00:20:16.856 "name": "BaseBdev3",
00:20:16.856 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e",
00:20:16.856 "is_configured": true,
00:20:16.856 "data_offset": 0,
00:20:16.856 "data_size": 65536
00:20:16.856 },
00:20:16.856 {
00:20:16.856 "name": "BaseBdev4",
00:20:16.856 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8",
00:20:16.856 "is_configured": true,
00:20:16.856 "data_offset": 0,
00:20:16.856 "data_size": 65536
00:20:16.856 }
00:20:16.856 ]
00:20:16.856 }'
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:16.856 07:16:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:16.856 07:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:16.856 07:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:16.856 07:16:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.821 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:17.821 "name": "raid_bdev1",
00:20:17.821 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e",
00:20:17.821 "strip_size_kb": 64,
00:20:17.822 "state": "online",
00:20:17.822 "raid_level": "raid5f",
00:20:17.822 "superblock": false,
00:20:17.822 "num_base_bdevs": 4,
00:20:17.822 "num_base_bdevs_discovered": 4,
00:20:17.822 "num_base_bdevs_operational": 4,
00:20:17.822 "process": {
00:20:17.822 "type": "rebuild",
00:20:17.822 "target": "spare",
00:20:17.822 "progress": {
00:20:17.822 "blocks": 130560,
00:20:17.822 "percent": 66
00:20:17.822 }
00:20:17.822 },
00:20:17.822 "base_bdevs_list": [
00:20:17.822 {
00:20:17.822 "name": "spare",
00:20:17.822 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df",
00:20:17.822 "is_configured": true,
00:20:17.822 "data_offset": 0,
00:20:17.822 "data_size": 65536
00:20:17.822 },
00:20:17.822 {
00:20:17.822 "name": "BaseBdev2",
00:20:17.822 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d",
00:20:17.822 "is_configured": true,
00:20:17.822 "data_offset": 0,
00:20:17.822 "data_size": 65536
00:20:17.822 },
00:20:17.822 {
00:20:17.822 "name": "BaseBdev3", 00:20:17.822 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:17.822 "is_configured": true, 00:20:17.822 "data_offset": 0, 00:20:17.822 "data_size": 65536 00:20:17.822 }, 00:20:17.822 { 00:20:17.822 "name": "BaseBdev4", 00:20:17.822 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:17.822 "is_configured": true, 00:20:17.822 "data_offset": 0, 00:20:17.822 "data_size": 65536 00:20:17.822 } 00:20:17.822 ] 00:20:17.822 }' 00:20:17.822 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.081 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.081 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.081 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.081 07:17:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.018 
07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.018 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.018 "name": "raid_bdev1", 00:20:19.018 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e", 00:20:19.018 "strip_size_kb": 64, 00:20:19.018 "state": "online", 00:20:19.018 "raid_level": "raid5f", 00:20:19.018 "superblock": false, 00:20:19.018 "num_base_bdevs": 4, 00:20:19.018 "num_base_bdevs_discovered": 4, 00:20:19.018 "num_base_bdevs_operational": 4, 00:20:19.018 "process": { 00:20:19.018 "type": "rebuild", 00:20:19.018 "target": "spare", 00:20:19.018 "progress": { 00:20:19.018 "blocks": 151680, 00:20:19.018 "percent": 77 00:20:19.018 } 00:20:19.018 }, 00:20:19.018 "base_bdevs_list": [ 00:20:19.018 { 00:20:19.018 "name": "spare", 00:20:19.018 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df", 00:20:19.018 "is_configured": true, 00:20:19.018 "data_offset": 0, 00:20:19.018 "data_size": 65536 00:20:19.018 }, 00:20:19.018 { 00:20:19.018 "name": "BaseBdev2", 00:20:19.018 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d", 00:20:19.018 "is_configured": true, 00:20:19.018 "data_offset": 0, 00:20:19.018 "data_size": 65536 00:20:19.018 }, 00:20:19.018 { 00:20:19.018 "name": "BaseBdev3", 00:20:19.018 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:19.018 "is_configured": true, 00:20:19.018 "data_offset": 0, 00:20:19.018 "data_size": 65536 00:20:19.018 }, 00:20:19.018 { 00:20:19.018 "name": "BaseBdev4", 00:20:19.018 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:19.018 "is_configured": true, 00:20:19.019 "data_offset": 0, 00:20:19.019 "data_size": 65536 00:20:19.019 } 00:20:19.019 ] 00:20:19.019 }' 00:20:19.019 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.019 07:17:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.019 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.277 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.277 07:17:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.211 "name": "raid_bdev1", 00:20:20.211 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e", 00:20:20.211 "strip_size_kb": 64, 00:20:20.211 "state": "online", 00:20:20.211 "raid_level": "raid5f", 00:20:20.211 "superblock": false, 00:20:20.211 "num_base_bdevs": 4, 00:20:20.211 
"num_base_bdevs_discovered": 4, 00:20:20.211 "num_base_bdevs_operational": 4, 00:20:20.211 "process": { 00:20:20.211 "type": "rebuild", 00:20:20.211 "target": "spare", 00:20:20.211 "progress": { 00:20:20.211 "blocks": 174720, 00:20:20.211 "percent": 88 00:20:20.211 } 00:20:20.211 }, 00:20:20.211 "base_bdevs_list": [ 00:20:20.211 { 00:20:20.211 "name": "spare", 00:20:20.211 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df", 00:20:20.211 "is_configured": true, 00:20:20.211 "data_offset": 0, 00:20:20.211 "data_size": 65536 00:20:20.211 }, 00:20:20.211 { 00:20:20.211 "name": "BaseBdev2", 00:20:20.211 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d", 00:20:20.211 "is_configured": true, 00:20:20.211 "data_offset": 0, 00:20:20.211 "data_size": 65536 00:20:20.211 }, 00:20:20.211 { 00:20:20.211 "name": "BaseBdev3", 00:20:20.211 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:20.211 "is_configured": true, 00:20:20.211 "data_offset": 0, 00:20:20.211 "data_size": 65536 00:20:20.211 }, 00:20:20.211 { 00:20:20.211 "name": "BaseBdev4", 00:20:20.211 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:20.211 "is_configured": true, 00:20:20.211 "data_offset": 0, 00:20:20.211 "data_size": 65536 00:20:20.211 } 00:20:20.211 ] 00:20:20.211 }' 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.211 07:17:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.588 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.588 "name": "raid_bdev1", 00:20:21.588 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e", 00:20:21.588 "strip_size_kb": 64, 00:20:21.588 "state": "online", 00:20:21.588 "raid_level": "raid5f", 00:20:21.588 "superblock": false, 00:20:21.588 "num_base_bdevs": 4, 00:20:21.588 "num_base_bdevs_discovered": 4, 00:20:21.588 "num_base_bdevs_operational": 4, 00:20:21.588 "process": { 00:20:21.588 "type": "rebuild", 00:20:21.588 "target": "spare", 00:20:21.588 "progress": { 00:20:21.588 "blocks": 195840, 00:20:21.588 "percent": 99 00:20:21.588 } 00:20:21.588 }, 00:20:21.588 "base_bdevs_list": [ 00:20:21.588 { 00:20:21.588 "name": "spare", 00:20:21.588 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df", 00:20:21.588 "is_configured": true, 00:20:21.588 "data_offset": 0, 00:20:21.588 "data_size": 65536 00:20:21.588 }, 00:20:21.588 { 00:20:21.588 "name": "BaseBdev2", 00:20:21.588 "uuid": 
"94848fe1-32c0-578c-9085-1d8a8fe7049d", 00:20:21.588 "is_configured": true, 00:20:21.588 "data_offset": 0, 00:20:21.588 "data_size": 65536 00:20:21.588 }, 00:20:21.588 { 00:20:21.588 "name": "BaseBdev3", 00:20:21.588 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:21.588 "is_configured": true, 00:20:21.589 "data_offset": 0, 00:20:21.589 "data_size": 65536 00:20:21.589 }, 00:20:21.589 { 00:20:21.589 "name": "BaseBdev4", 00:20:21.589 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:21.589 "is_configured": true, 00:20:21.589 "data_offset": 0, 00:20:21.589 "data_size": 65536 00:20:21.589 } 00:20:21.589 ] 00:20:21.589 }' 00:20:21.589 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.589 [2024-11-20 07:17:03.515540] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:21.589 [2024-11-20 07:17:03.515626] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:21.589 [2024-11-20 07:17:03.515687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.589 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.589 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.589 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.589 07:17:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.526 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.526 "name": "raid_bdev1", 00:20:22.526 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e", 00:20:22.526 "strip_size_kb": 64, 00:20:22.526 "state": "online", 00:20:22.526 "raid_level": "raid5f", 00:20:22.527 "superblock": false, 00:20:22.527 "num_base_bdevs": 4, 00:20:22.527 "num_base_bdevs_discovered": 4, 00:20:22.527 "num_base_bdevs_operational": 4, 00:20:22.527 "base_bdevs_list": [ 00:20:22.527 { 00:20:22.527 "name": "spare", 00:20:22.527 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 }, 00:20:22.527 { 00:20:22.527 "name": "BaseBdev2", 00:20:22.527 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 }, 00:20:22.527 { 00:20:22.527 "name": "BaseBdev3", 00:20:22.527 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 }, 00:20:22.527 { 00:20:22.527 "name": "BaseBdev4", 00:20:22.527 
"uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 } 00:20:22.527 ] 00:20:22.527 }' 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.527 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.527 "name": "raid_bdev1", 00:20:22.527 "uuid": 
"46e902d3-de70-4362-bd28-ad29061c961e", 00:20:22.527 "strip_size_kb": 64, 00:20:22.527 "state": "online", 00:20:22.527 "raid_level": "raid5f", 00:20:22.527 "superblock": false, 00:20:22.527 "num_base_bdevs": 4, 00:20:22.527 "num_base_bdevs_discovered": 4, 00:20:22.527 "num_base_bdevs_operational": 4, 00:20:22.527 "base_bdevs_list": [ 00:20:22.527 { 00:20:22.527 "name": "spare", 00:20:22.527 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 }, 00:20:22.527 { 00:20:22.527 "name": "BaseBdev2", 00:20:22.527 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 }, 00:20:22.527 { 00:20:22.527 "name": "BaseBdev3", 00:20:22.527 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 }, 00:20:22.527 { 00:20:22.527 "name": "BaseBdev4", 00:20:22.527 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:22.527 "is_configured": true, 00:20:22.527 "data_offset": 0, 00:20:22.527 "data_size": 65536 00:20:22.527 } 00:20:22.527 ] 00:20:22.527 }' 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.786 07:17:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.786 "name": "raid_bdev1", 00:20:22.786 "uuid": "46e902d3-de70-4362-bd28-ad29061c961e", 00:20:22.786 "strip_size_kb": 64, 00:20:22.786 "state": "online", 00:20:22.786 "raid_level": "raid5f", 00:20:22.786 "superblock": false, 00:20:22.786 "num_base_bdevs": 4, 00:20:22.786 "num_base_bdevs_discovered": 4, 00:20:22.786 "num_base_bdevs_operational": 4, 00:20:22.786 "base_bdevs_list": [ 00:20:22.786 { 00:20:22.786 "name": "spare", 00:20:22.786 "uuid": "f80475de-1350-57c3-be33-3dcbe30898df", 00:20:22.786 "is_configured": 
true, 00:20:22.786 "data_offset": 0, 00:20:22.786 "data_size": 65536 00:20:22.786 }, 00:20:22.786 { 00:20:22.786 "name": "BaseBdev2", 00:20:22.786 "uuid": "94848fe1-32c0-578c-9085-1d8a8fe7049d", 00:20:22.786 "is_configured": true, 00:20:22.786 "data_offset": 0, 00:20:22.786 "data_size": 65536 00:20:22.786 }, 00:20:22.786 { 00:20:22.786 "name": "BaseBdev3", 00:20:22.786 "uuid": "ccd17225-8b26-5105-9659-6cd079cf297e", 00:20:22.786 "is_configured": true, 00:20:22.786 "data_offset": 0, 00:20:22.786 "data_size": 65536 00:20:22.786 }, 00:20:22.786 { 00:20:22.786 "name": "BaseBdev4", 00:20:22.786 "uuid": "63a0d54d-4daa-5c45-a23f-d76941dc07a8", 00:20:22.786 "is_configured": true, 00:20:22.786 "data_offset": 0, 00:20:22.786 "data_size": 65536 00:20:22.786 } 00:20:22.786 ] 00:20:22.786 }' 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.786 07:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.044 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:23.044 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.044 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.316 [2024-11-20 07:17:05.310284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.316 [2024-11-20 07:17:05.310471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.316 [2024-11-20 07:17:05.310600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.316 [2024-11-20 07:17:05.310725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.316 [2024-11-20 07:17:05.310740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:23.316 07:17:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.316 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:23.614 /dev/nbd0 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.614 1+0 records in 00:20:23.614 1+0 records out 00:20:23.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592547 s, 6.9 MB/s 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.614 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:23.875 /dev/nbd1 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.875 1+0 records in 00:20:23.875 1+0 records out 00:20:23.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495172 s, 8.3 MB/s 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.875 07:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:24.134 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:24.134 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:24.134 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:24.134 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:24.134 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:24.134 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.134 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.393 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85109 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85109 ']' 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85109 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85109 00:20:24.652 killing process with pid 85109 00:20:24.652 Received shutdown signal, test time was about 60.000000 seconds 00:20:24.652 00:20:24.652 Latency(us) 00:20:24.652 [2024-11-20T07:17:06.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.652 [2024-11-20T07:17:06.917Z] =================================================================================================================== 00:20:24.652 [2024-11-20T07:17:06.917Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.652 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.653 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.653 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85109' 00:20:24.653 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85109 00:20:24.653 [2024-11-20 07:17:06.721248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:24.653 07:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85109 00:20:25.220 [2024-11-20 07:17:07.260740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.598 ************************************ 00:20:26.598 END TEST raid5f_rebuild_test 00:20:26.598 ************************************ 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:26.598 00:20:26.598 real 0m21.095s 00:20:26.598 user 0m25.364s 00:20:26.598 sys 0m2.556s 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.598 07:17:08 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:20:26.598 07:17:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:26.598 07:17:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.598 07:17:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.598 ************************************ 00:20:26.598 START TEST raid5f_rebuild_test_sb 00:20:26.598 ************************************ 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85641 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85641 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85641 ']' 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.598 07:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:26.598 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.598 Zero copy mechanism will not be used. 00:20:26.598 [2024-11-20 07:17:08.618214] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:26.598 [2024-11-20 07:17:08.618329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85641 ] 00:20:26.598 [2024-11-20 07:17:08.798891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.857 [2024-11-20 07:17:08.920241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.115 [2024-11-20 07:17:09.139742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.115 [2024-11-20 07:17:09.139803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.374 BaseBdev1_malloc 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.374 [2024-11-20 07:17:09.526497] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:27.374 [2024-11-20 07:17:09.526569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.374 [2024-11-20 07:17:09.526592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:27.374 [2024-11-20 07:17:09.526603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.374 [2024-11-20 07:17:09.528766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.374 [2024-11-20 07:17:09.528808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:27.374 BaseBdev1 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.374 BaseBdev2_malloc 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.374 [2024-11-20 07:17:09.585717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:27.374 [2024-11-20 07:17:09.585805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:27.374 [2024-11-20 07:17:09.585828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:27.374 [2024-11-20 07:17:09.585843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.374 [2024-11-20 07:17:09.588129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.374 [2024-11-20 07:17:09.588171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:27.374 BaseBdev2 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.374 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.632 BaseBdev3_malloc 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.632 [2024-11-20 07:17:09.660683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:27.632 [2024-11-20 07:17:09.660755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.632 [2024-11-20 07:17:09.660783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:27.632 [2024-11-20 
07:17:09.660796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.632 [2024-11-20 07:17:09.663019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.632 [2024-11-20 07:17:09.663061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:27.632 BaseBdev3 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.632 BaseBdev4_malloc 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.632 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.632 [2024-11-20 07:17:09.722174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:27.632 [2024-11-20 07:17:09.722246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.632 [2024-11-20 07:17:09.722268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:27.632 [2024-11-20 07:17:09.722279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.632 [2024-11-20 07:17:09.724704] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:27.632 [2024-11-20 07:17:09.724762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:27.632 BaseBdev4 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.633 spare_malloc 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.633 spare_delay 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.633 [2024-11-20 07:17:09.790148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.633 [2024-11-20 07:17:09.790282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.633 [2024-11-20 07:17:09.790307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:20:27.633 [2024-11-20 07:17:09.790318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.633 [2024-11-20 07:17:09.792409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.633 [2024-11-20 07:17:09.792448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.633 spare 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.633 [2024-11-20 07:17:09.802174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.633 [2024-11-20 07:17:09.803967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.633 [2024-11-20 07:17:09.804030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.633 [2024-11-20 07:17:09.804093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:27.633 [2024-11-20 07:17:09.804276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:27.633 [2024-11-20 07:17:09.804291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:27.633 [2024-11-20 07:17:09.804548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:27.633 [2024-11-20 07:17:09.811495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:27.633 [2024-11-20 07:17:09.811556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:20:27.633 [2024-11-20 07:17:09.811778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.633 07:17:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.633 "name": "raid_bdev1", 00:20:27.633 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:27.633 "strip_size_kb": 64, 00:20:27.633 "state": "online", 00:20:27.633 "raid_level": "raid5f", 00:20:27.633 "superblock": true, 00:20:27.633 "num_base_bdevs": 4, 00:20:27.633 "num_base_bdevs_discovered": 4, 00:20:27.633 "num_base_bdevs_operational": 4, 00:20:27.633 "base_bdevs_list": [ 00:20:27.633 { 00:20:27.633 "name": "BaseBdev1", 00:20:27.633 "uuid": "812a3227-a8ab-545f-9b73-9979e5222161", 00:20:27.633 "is_configured": true, 00:20:27.633 "data_offset": 2048, 00:20:27.633 "data_size": 63488 00:20:27.633 }, 00:20:27.633 { 00:20:27.633 "name": "BaseBdev2", 00:20:27.633 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:27.633 "is_configured": true, 00:20:27.633 "data_offset": 2048, 00:20:27.633 "data_size": 63488 00:20:27.633 }, 00:20:27.633 { 00:20:27.633 "name": "BaseBdev3", 00:20:27.633 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:27.633 "is_configured": true, 00:20:27.633 "data_offset": 2048, 00:20:27.633 "data_size": 63488 00:20:27.633 }, 00:20:27.633 { 00:20:27.633 "name": "BaseBdev4", 00:20:27.633 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:27.633 "is_configured": true, 00:20:27.633 "data_offset": 2048, 00:20:27.633 "data_size": 63488 00:20:27.633 } 00:20:27.633 ] 00:20:27.633 }' 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.633 07:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:28.199 07:17:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.199 [2024-11-20 07:17:10.271683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:28.199 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:28.200 07:17:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:28.200 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:28.200 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:28.200 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:28.200 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:28.458 [2024-11-20 07:17:10.570972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:28.458 /dev/nbd0 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.458 1+0 records in 00:20:28.458 
1+0 records out 00:20:28.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664639 s, 6.2 MB/s 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:28.458 07:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:20:29.028 496+0 records in 00:20:29.028 496+0 records out 00:20:29.028 97517568 bytes (98 MB, 93 MiB) copied, 0.601646 s, 162 MB/s 00:20:29.028 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:29.028 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:29.028 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:29.028 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:29.028 07:17:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:29.028 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.028 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:29.287 [2024-11-20 07:17:11.493862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.287 [2024-11-20 07:17:11.513539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:29.287 07:17:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.287 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.546 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.546 "name": "raid_bdev1", 00:20:29.546 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:29.546 "strip_size_kb": 64, 00:20:29.546 "state": "online", 00:20:29.546 "raid_level": "raid5f", 00:20:29.546 "superblock": true, 00:20:29.546 "num_base_bdevs": 4, 00:20:29.546 "num_base_bdevs_discovered": 3, 00:20:29.546 "num_base_bdevs_operational": 3, 00:20:29.546 
"base_bdevs_list": [ 00:20:29.546 { 00:20:29.546 "name": null, 00:20:29.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.546 "is_configured": false, 00:20:29.546 "data_offset": 0, 00:20:29.546 "data_size": 63488 00:20:29.546 }, 00:20:29.546 { 00:20:29.546 "name": "BaseBdev2", 00:20:29.546 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:29.546 "is_configured": true, 00:20:29.546 "data_offset": 2048, 00:20:29.546 "data_size": 63488 00:20:29.546 }, 00:20:29.546 { 00:20:29.546 "name": "BaseBdev3", 00:20:29.546 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:29.546 "is_configured": true, 00:20:29.546 "data_offset": 2048, 00:20:29.546 "data_size": 63488 00:20:29.546 }, 00:20:29.546 { 00:20:29.546 "name": "BaseBdev4", 00:20:29.546 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:29.546 "is_configured": true, 00:20:29.546 "data_offset": 2048, 00:20:29.546 "data_size": 63488 00:20:29.546 } 00:20:29.546 ] 00:20:29.546 }' 00:20:29.546 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.546 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.807 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.807 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.807 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.807 [2024-11-20 07:17:11.912915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.807 [2024-11-20 07:17:11.933275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:20:29.807 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.807 07:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:29.807 [2024-11-20 07:17:11.946031] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.740 "name": "raid_bdev1", 00:20:30.740 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:30.740 "strip_size_kb": 64, 00:20:30.740 "state": "online", 00:20:30.740 "raid_level": "raid5f", 00:20:30.740 "superblock": true, 00:20:30.740 "num_base_bdevs": 4, 00:20:30.740 "num_base_bdevs_discovered": 4, 00:20:30.740 "num_base_bdevs_operational": 4, 00:20:30.740 "process": { 00:20:30.740 "type": "rebuild", 00:20:30.740 "target": "spare", 00:20:30.740 "progress": { 00:20:30.740 "blocks": 17280, 00:20:30.740 "percent": 9 00:20:30.740 } 00:20:30.740 }, 00:20:30.740 "base_bdevs_list": [ 00:20:30.740 { 00:20:30.740 "name": "spare", 00:20:30.740 "uuid": 
"9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:30.740 "is_configured": true, 00:20:30.740 "data_offset": 2048, 00:20:30.740 "data_size": 63488 00:20:30.740 }, 00:20:30.740 { 00:20:30.740 "name": "BaseBdev2", 00:20:30.740 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:30.740 "is_configured": true, 00:20:30.740 "data_offset": 2048, 00:20:30.740 "data_size": 63488 00:20:30.740 }, 00:20:30.740 { 00:20:30.740 "name": "BaseBdev3", 00:20:30.740 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:30.740 "is_configured": true, 00:20:30.740 "data_offset": 2048, 00:20:30.740 "data_size": 63488 00:20:30.740 }, 00:20:30.740 { 00:20:30.740 "name": "BaseBdev4", 00:20:30.740 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:30.740 "is_configured": true, 00:20:30.740 "data_offset": 2048, 00:20:30.740 "data_size": 63488 00:20:30.740 } 00:20:30.740 ] 00:20:30.740 }' 00:20:30.740 07:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.997 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.997 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.997 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.997 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:30.997 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.997 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.997 [2024-11-20 07:17:13.093805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.997 [2024-11-20 07:17:13.155748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.997 [2024-11-20 07:17:13.155831] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.997 [2024-11-20 07:17:13.155852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.997 [2024-11-20 07:17:13.155863] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.998 "name": "raid_bdev1", 00:20:30.998 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:30.998 "strip_size_kb": 64, 00:20:30.998 "state": "online", 00:20:30.998 "raid_level": "raid5f", 00:20:30.998 "superblock": true, 00:20:30.998 "num_base_bdevs": 4, 00:20:30.998 "num_base_bdevs_discovered": 3, 00:20:30.998 "num_base_bdevs_operational": 3, 00:20:30.998 "base_bdevs_list": [ 00:20:30.998 { 00:20:30.998 "name": null, 00:20:30.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.998 "is_configured": false, 00:20:30.998 "data_offset": 0, 00:20:30.998 "data_size": 63488 00:20:30.998 }, 00:20:30.998 { 00:20:30.998 "name": "BaseBdev2", 00:20:30.998 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:30.998 "is_configured": true, 00:20:30.998 "data_offset": 2048, 00:20:30.998 "data_size": 63488 00:20:30.998 }, 00:20:30.998 { 00:20:30.998 "name": "BaseBdev3", 00:20:30.998 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:30.998 "is_configured": true, 00:20:30.998 "data_offset": 2048, 00:20:30.998 "data_size": 63488 00:20:30.998 }, 00:20:30.998 { 00:20:30.998 "name": "BaseBdev4", 00:20:30.998 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:30.998 "is_configured": true, 00:20:30.998 "data_offset": 2048, 00:20:30.998 "data_size": 63488 00:20:30.998 } 00:20:30.998 ] 00:20:30.998 }' 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.998 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.565 
07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.565 "name": "raid_bdev1", 00:20:31.565 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:31.565 "strip_size_kb": 64, 00:20:31.565 "state": "online", 00:20:31.565 "raid_level": "raid5f", 00:20:31.565 "superblock": true, 00:20:31.565 "num_base_bdevs": 4, 00:20:31.565 "num_base_bdevs_discovered": 3, 00:20:31.565 "num_base_bdevs_operational": 3, 00:20:31.565 "base_bdevs_list": [ 00:20:31.565 { 00:20:31.565 "name": null, 00:20:31.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.565 "is_configured": false, 00:20:31.565 "data_offset": 0, 00:20:31.565 "data_size": 63488 00:20:31.565 }, 00:20:31.565 { 00:20:31.565 "name": "BaseBdev2", 00:20:31.565 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:31.565 "is_configured": true, 00:20:31.565 "data_offset": 2048, 00:20:31.565 "data_size": 63488 00:20:31.565 }, 00:20:31.565 { 00:20:31.565 "name": "BaseBdev3", 00:20:31.565 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:31.565 "is_configured": true, 00:20:31.565 "data_offset": 2048, 00:20:31.565 
"data_size": 63488 00:20:31.565 }, 00:20:31.565 { 00:20:31.565 "name": "BaseBdev4", 00:20:31.565 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:31.565 "is_configured": true, 00:20:31.565 "data_offset": 2048, 00:20:31.565 "data_size": 63488 00:20:31.565 } 00:20:31.565 ] 00:20:31.565 }' 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.565 [2024-11-20 07:17:13.793441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.565 [2024-11-20 07:17:13.812968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.565 07:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:31.565 [2024-11-20 07:17:13.825431] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.940 "name": "raid_bdev1", 00:20:32.940 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:32.940 "strip_size_kb": 64, 00:20:32.940 "state": "online", 00:20:32.940 "raid_level": "raid5f", 00:20:32.940 "superblock": true, 00:20:32.940 "num_base_bdevs": 4, 00:20:32.940 "num_base_bdevs_discovered": 4, 00:20:32.940 "num_base_bdevs_operational": 4, 00:20:32.940 "process": { 00:20:32.940 "type": "rebuild", 00:20:32.940 "target": "spare", 00:20:32.940 "progress": { 00:20:32.940 "blocks": 17280, 00:20:32.940 "percent": 9 00:20:32.940 } 00:20:32.940 }, 00:20:32.940 "base_bdevs_list": [ 00:20:32.940 { 00:20:32.940 "name": "spare", 00:20:32.940 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:32.940 "is_configured": true, 00:20:32.940 "data_offset": 2048, 00:20:32.940 "data_size": 63488 00:20:32.940 }, 00:20:32.940 { 00:20:32.940 "name": "BaseBdev2", 00:20:32.940 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:32.940 "is_configured": true, 00:20:32.940 "data_offset": 2048, 00:20:32.940 "data_size": 63488 00:20:32.940 }, 00:20:32.940 { 
00:20:32.940 "name": "BaseBdev3", 00:20:32.940 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:32.940 "is_configured": true, 00:20:32.940 "data_offset": 2048, 00:20:32.940 "data_size": 63488 00:20:32.940 }, 00:20:32.940 { 00:20:32.940 "name": "BaseBdev4", 00:20:32.940 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:32.940 "is_configured": true, 00:20:32.940 "data_offset": 2048, 00:20:32.940 "data_size": 63488 00:20:32.940 } 00:20:32.940 ] 00:20:32.940 }' 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:32.940 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:32.940 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=670 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.941 "name": "raid_bdev1", 00:20:32.941 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:32.941 "strip_size_kb": 64, 00:20:32.941 "state": "online", 00:20:32.941 "raid_level": "raid5f", 00:20:32.941 "superblock": true, 00:20:32.941 "num_base_bdevs": 4, 00:20:32.941 "num_base_bdevs_discovered": 4, 00:20:32.941 "num_base_bdevs_operational": 4, 00:20:32.941 "process": { 00:20:32.941 "type": "rebuild", 00:20:32.941 "target": "spare", 00:20:32.941 "progress": { 00:20:32.941 "blocks": 21120, 00:20:32.941 "percent": 11 00:20:32.941 } 00:20:32.941 }, 00:20:32.941 "base_bdevs_list": [ 00:20:32.941 { 00:20:32.941 "name": "spare", 00:20:32.941 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:32.941 "is_configured": true, 00:20:32.941 "data_offset": 2048, 00:20:32.941 "data_size": 63488 00:20:32.941 }, 00:20:32.941 { 00:20:32.941 "name": "BaseBdev2", 00:20:32.941 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:32.941 "is_configured": true, 00:20:32.941 "data_offset": 2048, 00:20:32.941 "data_size": 63488 00:20:32.941 }, 00:20:32.941 { 
00:20:32.941 "name": "BaseBdev3", 00:20:32.941 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:32.941 "is_configured": true, 00:20:32.941 "data_offset": 2048, 00:20:32.941 "data_size": 63488 00:20:32.941 }, 00:20:32.941 { 00:20:32.941 "name": "BaseBdev4", 00:20:32.941 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:32.941 "is_configured": true, 00:20:32.941 "data_offset": 2048, 00:20:32.941 "data_size": 63488 00:20:32.941 } 00:20:32.941 ] 00:20:32.941 }' 00:20:32.941 07:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.941 07:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.941 07:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.941 07:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.941 07:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.885 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.885 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.885 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.885 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.885 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.885 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.885 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.886 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.886 07:17:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.886 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.886 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.886 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.886 "name": "raid_bdev1", 00:20:33.886 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:33.886 "strip_size_kb": 64, 00:20:33.886 "state": "online", 00:20:33.886 "raid_level": "raid5f", 00:20:33.886 "superblock": true, 00:20:33.886 "num_base_bdevs": 4, 00:20:33.886 "num_base_bdevs_discovered": 4, 00:20:33.886 "num_base_bdevs_operational": 4, 00:20:33.886 "process": { 00:20:33.886 "type": "rebuild", 00:20:33.886 "target": "spare", 00:20:33.886 "progress": { 00:20:33.886 "blocks": 42240, 00:20:33.886 "percent": 22 00:20:33.886 } 00:20:33.886 }, 00:20:33.886 "base_bdevs_list": [ 00:20:33.886 { 00:20:33.886 "name": "spare", 00:20:33.886 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:33.886 "is_configured": true, 00:20:33.886 "data_offset": 2048, 00:20:33.886 "data_size": 63488 00:20:33.886 }, 00:20:33.886 { 00:20:33.886 "name": "BaseBdev2", 00:20:33.886 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:33.886 "is_configured": true, 00:20:33.886 "data_offset": 2048, 00:20:33.886 "data_size": 63488 00:20:33.886 }, 00:20:33.886 { 00:20:33.886 "name": "BaseBdev3", 00:20:33.886 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:33.886 "is_configured": true, 00:20:33.886 "data_offset": 2048, 00:20:33.886 "data_size": 63488 00:20:33.886 }, 00:20:33.886 { 00:20:33.886 "name": "BaseBdev4", 00:20:33.886 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:33.886 "is_configured": true, 00:20:33.886 "data_offset": 2048, 00:20:33.886 "data_size": 63488 00:20:33.886 } 00:20:33.886 ] 00:20:33.886 }' 00:20:33.886 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.886 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.886 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.144 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.144 07:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.077 "name": "raid_bdev1", 00:20:35.077 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:35.077 "strip_size_kb": 64, 00:20:35.077 "state": 
"online", 00:20:35.077 "raid_level": "raid5f", 00:20:35.077 "superblock": true, 00:20:35.077 "num_base_bdevs": 4, 00:20:35.077 "num_base_bdevs_discovered": 4, 00:20:35.077 "num_base_bdevs_operational": 4, 00:20:35.077 "process": { 00:20:35.077 "type": "rebuild", 00:20:35.077 "target": "spare", 00:20:35.077 "progress": { 00:20:35.077 "blocks": 63360, 00:20:35.077 "percent": 33 00:20:35.077 } 00:20:35.077 }, 00:20:35.077 "base_bdevs_list": [ 00:20:35.077 { 00:20:35.077 "name": "spare", 00:20:35.077 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:35.077 "is_configured": true, 00:20:35.077 "data_offset": 2048, 00:20:35.077 "data_size": 63488 00:20:35.077 }, 00:20:35.077 { 00:20:35.077 "name": "BaseBdev2", 00:20:35.077 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:35.077 "is_configured": true, 00:20:35.077 "data_offset": 2048, 00:20:35.077 "data_size": 63488 00:20:35.077 }, 00:20:35.077 { 00:20:35.077 "name": "BaseBdev3", 00:20:35.077 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:35.077 "is_configured": true, 00:20:35.077 "data_offset": 2048, 00:20:35.077 "data_size": 63488 00:20:35.077 }, 00:20:35.077 { 00:20:35.077 "name": "BaseBdev4", 00:20:35.077 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:35.077 "is_configured": true, 00:20:35.077 "data_offset": 2048, 00:20:35.077 "data_size": 63488 00:20:35.077 } 00:20:35.077 ] 00:20:35.077 }' 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.077 07:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.454 "name": "raid_bdev1", 00:20:36.454 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:36.454 "strip_size_kb": 64, 00:20:36.454 "state": "online", 00:20:36.454 "raid_level": "raid5f", 00:20:36.454 "superblock": true, 00:20:36.454 "num_base_bdevs": 4, 00:20:36.454 "num_base_bdevs_discovered": 4, 00:20:36.454 "num_base_bdevs_operational": 4, 00:20:36.454 "process": { 00:20:36.454 "type": "rebuild", 00:20:36.454 "target": "spare", 00:20:36.454 "progress": { 00:20:36.454 "blocks": 84480, 00:20:36.454 "percent": 44 00:20:36.454 } 00:20:36.454 }, 00:20:36.454 "base_bdevs_list": [ 00:20:36.454 { 00:20:36.454 "name": "spare", 00:20:36.454 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 
00:20:36.454 "is_configured": true, 00:20:36.454 "data_offset": 2048, 00:20:36.454 "data_size": 63488 00:20:36.454 }, 00:20:36.454 { 00:20:36.454 "name": "BaseBdev2", 00:20:36.454 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:36.454 "is_configured": true, 00:20:36.454 "data_offset": 2048, 00:20:36.454 "data_size": 63488 00:20:36.454 }, 00:20:36.454 { 00:20:36.454 "name": "BaseBdev3", 00:20:36.454 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:36.454 "is_configured": true, 00:20:36.454 "data_offset": 2048, 00:20:36.454 "data_size": 63488 00:20:36.454 }, 00:20:36.454 { 00:20:36.454 "name": "BaseBdev4", 00:20:36.454 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:36.454 "is_configured": true, 00:20:36.454 "data_offset": 2048, 00:20:36.454 "data_size": 63488 00:20:36.454 } 00:20:36.454 ] 00:20:36.454 }' 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.454 07:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.388 07:17:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.388 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.388 "name": "raid_bdev1", 00:20:37.388 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:37.388 "strip_size_kb": 64, 00:20:37.388 "state": "online", 00:20:37.389 "raid_level": "raid5f", 00:20:37.389 "superblock": true, 00:20:37.389 "num_base_bdevs": 4, 00:20:37.389 "num_base_bdevs_discovered": 4, 00:20:37.389 "num_base_bdevs_operational": 4, 00:20:37.389 "process": { 00:20:37.389 "type": "rebuild", 00:20:37.389 "target": "spare", 00:20:37.389 "progress": { 00:20:37.389 "blocks": 105600, 00:20:37.389 "percent": 55 00:20:37.389 } 00:20:37.389 }, 00:20:37.389 "base_bdevs_list": [ 00:20:37.389 { 00:20:37.389 "name": "spare", 00:20:37.389 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:37.389 "is_configured": true, 00:20:37.389 "data_offset": 2048, 00:20:37.389 "data_size": 63488 00:20:37.389 }, 00:20:37.389 { 00:20:37.389 "name": "BaseBdev2", 00:20:37.389 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:37.389 "is_configured": true, 00:20:37.389 "data_offset": 2048, 00:20:37.389 "data_size": 63488 00:20:37.389 }, 00:20:37.389 { 00:20:37.389 "name": "BaseBdev3", 00:20:37.389 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:37.389 "is_configured": true, 00:20:37.389 "data_offset": 2048, 00:20:37.389 
"data_size": 63488 00:20:37.389 }, 00:20:37.389 { 00:20:37.389 "name": "BaseBdev4", 00:20:37.389 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:37.389 "is_configured": true, 00:20:37.389 "data_offset": 2048, 00:20:37.389 "data_size": 63488 00:20:37.389 } 00:20:37.389 ] 00:20:37.389 }' 00:20:37.389 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.389 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.389 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.389 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.389 07:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.325 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.587 
07:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.587 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.587 "name": "raid_bdev1", 00:20:38.587 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:38.587 "strip_size_kb": 64, 00:20:38.587 "state": "online", 00:20:38.587 "raid_level": "raid5f", 00:20:38.587 "superblock": true, 00:20:38.587 "num_base_bdevs": 4, 00:20:38.587 "num_base_bdevs_discovered": 4, 00:20:38.587 "num_base_bdevs_operational": 4, 00:20:38.587 "process": { 00:20:38.587 "type": "rebuild", 00:20:38.587 "target": "spare", 00:20:38.587 "progress": { 00:20:38.587 "blocks": 128640, 00:20:38.587 "percent": 67 00:20:38.587 } 00:20:38.587 }, 00:20:38.587 "base_bdevs_list": [ 00:20:38.587 { 00:20:38.587 "name": "spare", 00:20:38.587 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:38.587 "is_configured": true, 00:20:38.587 "data_offset": 2048, 00:20:38.587 "data_size": 63488 00:20:38.587 }, 00:20:38.587 { 00:20:38.587 "name": "BaseBdev2", 00:20:38.587 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:38.587 "is_configured": true, 00:20:38.587 "data_offset": 2048, 00:20:38.587 "data_size": 63488 00:20:38.587 }, 00:20:38.587 { 00:20:38.587 "name": "BaseBdev3", 00:20:38.587 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:38.587 "is_configured": true, 00:20:38.587 "data_offset": 2048, 00:20:38.587 "data_size": 63488 00:20:38.587 }, 00:20:38.587 { 00:20:38.587 "name": "BaseBdev4", 00:20:38.587 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:38.587 "is_configured": true, 00:20:38.587 "data_offset": 2048, 00:20:38.587 "data_size": 63488 00:20:38.587 } 00:20:38.587 ] 00:20:38.587 }' 00:20:38.587 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.587 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.587 07:17:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.587 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.587 07:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.523 "name": "raid_bdev1", 00:20:39.523 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:39.523 "strip_size_kb": 64, 00:20:39.523 "state": "online", 00:20:39.523 "raid_level": "raid5f", 00:20:39.523 "superblock": true, 00:20:39.523 "num_base_bdevs": 4, 00:20:39.523 "num_base_bdevs_discovered": 4, 00:20:39.523 "num_base_bdevs_operational": 
4, 00:20:39.523 "process": { 00:20:39.523 "type": "rebuild", 00:20:39.523 "target": "spare", 00:20:39.523 "progress": { 00:20:39.523 "blocks": 149760, 00:20:39.523 "percent": 78 00:20:39.523 } 00:20:39.523 }, 00:20:39.523 "base_bdevs_list": [ 00:20:39.523 { 00:20:39.523 "name": "spare", 00:20:39.523 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:39.523 "is_configured": true, 00:20:39.523 "data_offset": 2048, 00:20:39.523 "data_size": 63488 00:20:39.523 }, 00:20:39.523 { 00:20:39.523 "name": "BaseBdev2", 00:20:39.523 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:39.523 "is_configured": true, 00:20:39.523 "data_offset": 2048, 00:20:39.523 "data_size": 63488 00:20:39.523 }, 00:20:39.523 { 00:20:39.523 "name": "BaseBdev3", 00:20:39.523 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:39.523 "is_configured": true, 00:20:39.523 "data_offset": 2048, 00:20:39.523 "data_size": 63488 00:20:39.523 }, 00:20:39.523 { 00:20:39.523 "name": "BaseBdev4", 00:20:39.523 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:39.523 "is_configured": true, 00:20:39.523 "data_offset": 2048, 00:20:39.523 "data_size": 63488 00:20:39.523 } 00:20:39.523 ] 00:20:39.523 }' 00:20:39.523 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.782 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.782 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.782 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.782 07:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.716 
07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.716 "name": "raid_bdev1", 00:20:40.716 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:40.716 "strip_size_kb": 64, 00:20:40.716 "state": "online", 00:20:40.716 "raid_level": "raid5f", 00:20:40.716 "superblock": true, 00:20:40.716 "num_base_bdevs": 4, 00:20:40.716 "num_base_bdevs_discovered": 4, 00:20:40.716 "num_base_bdevs_operational": 4, 00:20:40.716 "process": { 00:20:40.716 "type": "rebuild", 00:20:40.716 "target": "spare", 00:20:40.716 "progress": { 00:20:40.716 "blocks": 170880, 00:20:40.716 "percent": 89 00:20:40.716 } 00:20:40.716 }, 00:20:40.716 "base_bdevs_list": [ 00:20:40.716 { 00:20:40.716 "name": "spare", 00:20:40.716 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:40.716 "is_configured": true, 00:20:40.716 "data_offset": 2048, 00:20:40.716 "data_size": 63488 00:20:40.716 }, 00:20:40.716 { 00:20:40.716 "name": "BaseBdev2", 00:20:40.716 "uuid": 
"3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:40.716 "is_configured": true, 00:20:40.716 "data_offset": 2048, 00:20:40.716 "data_size": 63488 00:20:40.716 }, 00:20:40.716 { 00:20:40.716 "name": "BaseBdev3", 00:20:40.716 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:40.716 "is_configured": true, 00:20:40.716 "data_offset": 2048, 00:20:40.716 "data_size": 63488 00:20:40.716 }, 00:20:40.716 { 00:20:40.716 "name": "BaseBdev4", 00:20:40.716 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:40.716 "is_configured": true, 00:20:40.716 "data_offset": 2048, 00:20:40.716 "data_size": 63488 00:20:40.716 } 00:20:40.716 ] 00:20:40.716 }' 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.716 07:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.976 07:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.976 07:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:41.917 [2024-11-20 07:17:23.908585] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:41.917 [2024-11-20 07:17:23.908698] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:41.917 [2024-11-20 07:17:23.908894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.917 "name": "raid_bdev1", 00:20:41.917 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:41.917 "strip_size_kb": 64, 00:20:41.917 "state": "online", 00:20:41.917 "raid_level": "raid5f", 00:20:41.917 "superblock": true, 00:20:41.917 "num_base_bdevs": 4, 00:20:41.917 "num_base_bdevs_discovered": 4, 00:20:41.917 "num_base_bdevs_operational": 4, 00:20:41.917 "base_bdevs_list": [ 00:20:41.917 { 00:20:41.917 "name": "spare", 00:20:41.917 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 }, 00:20:41.917 { 00:20:41.917 "name": "BaseBdev2", 00:20:41.917 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 }, 00:20:41.917 { 00:20:41.917 "name": "BaseBdev3", 00:20:41.917 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 }, 
00:20:41.917 { 00:20:41.917 "name": "BaseBdev4", 00:20:41.917 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 } 00:20:41.917 ] 00:20:41.917 }' 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.917 "name": "raid_bdev1", 00:20:41.917 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:41.917 "strip_size_kb": 64, 00:20:41.917 "state": "online", 00:20:41.917 "raid_level": "raid5f", 00:20:41.917 "superblock": true, 00:20:41.917 "num_base_bdevs": 4, 00:20:41.917 "num_base_bdevs_discovered": 4, 00:20:41.917 "num_base_bdevs_operational": 4, 00:20:41.917 "base_bdevs_list": [ 00:20:41.917 { 00:20:41.917 "name": "spare", 00:20:41.917 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 }, 00:20:41.917 { 00:20:41.917 "name": "BaseBdev2", 00:20:41.917 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 }, 00:20:41.917 { 00:20:41.917 "name": "BaseBdev3", 00:20:41.917 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 }, 00:20:41.917 { 00:20:41.917 "name": "BaseBdev4", 00:20:41.917 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:41.917 "is_configured": true, 00:20:41.917 "data_offset": 2048, 00:20:41.917 "data_size": 63488 00:20:41.917 } 00:20:41.917 ] 00:20:41.917 }' 00:20:41.917 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:42.175 07:17:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.175 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.176 "name": "raid_bdev1", 00:20:42.176 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:42.176 "strip_size_kb": 64, 00:20:42.176 "state": "online", 00:20:42.176 "raid_level": "raid5f", 00:20:42.176 "superblock": true, 00:20:42.176 "num_base_bdevs": 4, 00:20:42.176 "num_base_bdevs_discovered": 4, 00:20:42.176 "num_base_bdevs_operational": 4, 00:20:42.176 
"base_bdevs_list": [ 00:20:42.176 { 00:20:42.176 "name": "spare", 00:20:42.176 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:42.176 "is_configured": true, 00:20:42.176 "data_offset": 2048, 00:20:42.176 "data_size": 63488 00:20:42.176 }, 00:20:42.176 { 00:20:42.176 "name": "BaseBdev2", 00:20:42.176 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:42.176 "is_configured": true, 00:20:42.176 "data_offset": 2048, 00:20:42.176 "data_size": 63488 00:20:42.176 }, 00:20:42.176 { 00:20:42.176 "name": "BaseBdev3", 00:20:42.176 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:42.176 "is_configured": true, 00:20:42.176 "data_offset": 2048, 00:20:42.176 "data_size": 63488 00:20:42.176 }, 00:20:42.176 { 00:20:42.176 "name": "BaseBdev4", 00:20:42.176 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:42.176 "is_configured": true, 00:20:42.176 "data_offset": 2048, 00:20:42.176 "data_size": 63488 00:20:42.176 } 00:20:42.176 ] 00:20:42.176 }' 00:20:42.176 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.176 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.433 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:42.433 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.433 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.433 [2024-11-20 07:17:24.672373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.433 [2024-11-20 07:17:24.672416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.433 [2024-11-20 07:17:24.672521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.433 [2024-11-20 07:17:24.672645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:20:42.433 [2024-11-20 07:17:24.672671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:42.433 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.433 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.433 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:42.433 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.434 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.434 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.691 07:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:42.949 /dev/nbd0 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.949 1+0 records in 00:20:42.949 1+0 records out 00:20:42.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328552 s, 12.5 MB/s 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:42.949 07:17:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.949 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:43.207 /dev/nbd1 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:20:43.207 1+0 records in 00:20:43.207 1+0 records out 00:20:43.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443483 s, 9.2 MB/s 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.207 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:43.466 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:43.466 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.466 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:43.466 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.466 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:43.466 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.466 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.724 07:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.983 [2024-11-20 07:17:26.161396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:43.983 [2024-11-20 07:17:26.161487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.983 [2024-11-20 07:17:26.161524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:43.983 [2024-11-20 07:17:26.161538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.983 [2024-11-20 07:17:26.164430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.983 [2024-11-20 07:17:26.164492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:43.983 [2024-11-20 07:17:26.164620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:43.983 [2024-11-20 07:17:26.164695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.983 [2024-11-20 07:17:26.164894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.983 [2024-11-20 07:17:26.165016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.983 [2024-11-20 07:17:26.165125] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:43.983 spare 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.983 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.241 [2024-11-20 07:17:26.265060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:44.241 [2024-11-20 07:17:26.265136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:44.241 [2024-11-20 07:17:26.265582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:20:44.241 [2024-11-20 07:17:26.275943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:44.241 [2024-11-20 07:17:26.276059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:44.241 [2024-11-20 07:17:26.276464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.241 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.241 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:44.241 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.241 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
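The teardown earlier in this trace stops each nbd device over RPC and then polls `/proc/partitions` (up to 20 times in `waitfornbd_exit`) until the device name disappears. A minimal, generalized sketch of that bounded-retry pattern, with a sentinel file standing in for the `/dev/nbdX` entry (the background remover simulates the kernel releasing the device):

```shell
#!/usr/bin/env bash
# Generalized sketch of the waitfornbd_exit pattern seen in nbd_common.sh:
# poll a condition up to 20 times, sleeping briefly between attempts.
# The sentinel file is a stand-in for the nbd entry in /proc/partitions.
wait_gone() {
    local path=$1 i
    for ((i = 1; i <= 20; i++)); do
        if [ -e "$path" ]; then
            sleep 0.1
        else
            return 0   # condition met: resource is gone
        fi
    done
    return 1           # timed out after 20 polls
}

sentinel=$(mktemp)
(sleep 0.2 && rm -f "$sentinel") &   # simulate asynchronous device removal
wait_gone "$sentinel" && echo "device gone"
wait
```

In the real harness the polled condition is `grep -q -w "$nbd_name" /proc/partitions`; everything else in this sketch is an illustrative stand-in.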
00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.242 "name": "raid_bdev1", 00:20:44.242 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:44.242 "strip_size_kb": 64, 00:20:44.242 "state": "online", 00:20:44.242 "raid_level": "raid5f", 00:20:44.242 "superblock": true, 00:20:44.242 "num_base_bdevs": 4, 00:20:44.242 "num_base_bdevs_discovered": 4, 00:20:44.242 "num_base_bdevs_operational": 4, 00:20:44.242 "base_bdevs_list": [ 00:20:44.242 { 00:20:44.242 "name": "spare", 00:20:44.242 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:44.242 "is_configured": true, 00:20:44.242 "data_offset": 2048, 00:20:44.242 "data_size": 63488 00:20:44.242 }, 00:20:44.242 { 00:20:44.242 "name": "BaseBdev2", 00:20:44.242 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:44.242 "is_configured": true, 00:20:44.242 "data_offset": 
2048, 00:20:44.242 "data_size": 63488 00:20:44.242 }, 00:20:44.242 { 00:20:44.242 "name": "BaseBdev3", 00:20:44.242 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:44.242 "is_configured": true, 00:20:44.242 "data_offset": 2048, 00:20:44.242 "data_size": 63488 00:20:44.242 }, 00:20:44.242 { 00:20:44.242 "name": "BaseBdev4", 00:20:44.242 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:44.242 "is_configured": true, 00:20:44.242 "data_offset": 2048, 00:20:44.242 "data_size": 63488 00:20:44.242 } 00:20:44.242 ] 00:20:44.242 }' 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.242 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.500 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:44.758 "name": 
"raid_bdev1", 00:20:44.758 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:44.758 "strip_size_kb": 64, 00:20:44.758 "state": "online", 00:20:44.758 "raid_level": "raid5f", 00:20:44.758 "superblock": true, 00:20:44.758 "num_base_bdevs": 4, 00:20:44.758 "num_base_bdevs_discovered": 4, 00:20:44.758 "num_base_bdevs_operational": 4, 00:20:44.758 "base_bdevs_list": [ 00:20:44.758 { 00:20:44.758 "name": "spare", 00:20:44.758 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:44.758 "is_configured": true, 00:20:44.758 "data_offset": 2048, 00:20:44.758 "data_size": 63488 00:20:44.758 }, 00:20:44.758 { 00:20:44.758 "name": "BaseBdev2", 00:20:44.758 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:44.758 "is_configured": true, 00:20:44.758 "data_offset": 2048, 00:20:44.758 "data_size": 63488 00:20:44.758 }, 00:20:44.758 { 00:20:44.758 "name": "BaseBdev3", 00:20:44.758 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:44.758 "is_configured": true, 00:20:44.758 "data_offset": 2048, 00:20:44.758 "data_size": 63488 00:20:44.758 }, 00:20:44.758 { 00:20:44.758 "name": "BaseBdev4", 00:20:44.758 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:44.758 "is_configured": true, 00:20:44.758 "data_offset": 2048, 00:20:44.758 "data_size": 63488 00:20:44.758 } 00:20:44.758 ] 00:20:44.758 }' 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.758 
07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.758 [2024-11-20 07:17:26.899205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.758 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.758 "name": "raid_bdev1", 00:20:44.758 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:44.758 "strip_size_kb": 64, 00:20:44.758 "state": "online", 00:20:44.758 "raid_level": "raid5f", 00:20:44.758 "superblock": true, 00:20:44.758 "num_base_bdevs": 4, 00:20:44.758 "num_base_bdevs_discovered": 3, 00:20:44.758 "num_base_bdevs_operational": 3, 00:20:44.758 "base_bdevs_list": [ 00:20:44.758 { 00:20:44.759 "name": null, 00:20:44.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.759 "is_configured": false, 00:20:44.759 "data_offset": 0, 00:20:44.759 "data_size": 63488 00:20:44.759 }, 00:20:44.759 { 00:20:44.759 "name": "BaseBdev2", 00:20:44.759 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:44.759 "is_configured": true, 00:20:44.759 "data_offset": 2048, 00:20:44.759 "data_size": 63488 00:20:44.759 }, 00:20:44.759 { 00:20:44.759 "name": "BaseBdev3", 00:20:44.759 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:44.759 "is_configured": true, 00:20:44.759 "data_offset": 2048, 00:20:44.759 "data_size": 63488 00:20:44.759 }, 00:20:44.759 { 00:20:44.759 "name": "BaseBdev4", 00:20:44.759 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:44.759 "is_configured": true, 00:20:44.759 "data_offset": 
2048, 00:20:44.759 "data_size": 63488 00:20:44.759 } 00:20:44.759 ] 00:20:44.759 }' 00:20:44.759 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.759 07:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.325 07:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:45.325 07:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.325 07:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.325 [2024-11-20 07:17:27.326501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.325 [2024-11-20 07:17:27.326729] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:45.325 [2024-11-20 07:17:27.326750] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
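The `verify_raid_bdev_state` calls in this trace fetch `bdev_raid_get_bdevs all` over `rpc.py`, select the target bdev with `jq`, and compare fields such as `state` and `num_base_bdevs_operational` against expected values. A hedged sketch of that check, with the RPC reply stubbed as a local JSON string and the field matching done with `grep` so the sketch carries no `jq` dependency (the real harness uses `jq -r '.[] | select(.name == ...)'`):

```shell
#!/usr/bin/env bash
# Stubbed stand-in for: rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid5f", "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3 }'

# Check two of the fields the harness asserts on; return nonzero on mismatch.
verify_state() {
    local expected_state=$1 expected_operational=$2
    printf '%s' "$raid_bdev_info" |
        grep -q "\"state\": \"$expected_state\"" || return 1
    printf '%s' "$raid_bdev_info" |
        grep -q "\"num_base_bdevs_operational\": $expected_operational" || return 1
}

verify_state online 3 && echo "raid_bdev1 verified"
```

This mirrors the state the log shows after removing the `spare` base bdev: still `online` (raid5f tolerates one missing member of four), with 3 of 4 base bdevs operational.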
00:20:45.325 [2024-11-20 07:17:27.326797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.325 [2024-11-20 07:17:27.346122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:20:45.325 07:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.325 07:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:45.325 [2024-11-20 07:17:27.358228] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.260 "name": "raid_bdev1", 00:20:46.260 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:46.260 "strip_size_kb": 64, 00:20:46.260 "state": "online", 00:20:46.260 
"raid_level": "raid5f", 00:20:46.260 "superblock": true, 00:20:46.260 "num_base_bdevs": 4, 00:20:46.260 "num_base_bdevs_discovered": 4, 00:20:46.260 "num_base_bdevs_operational": 4, 00:20:46.260 "process": { 00:20:46.260 "type": "rebuild", 00:20:46.260 "target": "spare", 00:20:46.260 "progress": { 00:20:46.260 "blocks": 17280, 00:20:46.260 "percent": 9 00:20:46.260 } 00:20:46.260 }, 00:20:46.260 "base_bdevs_list": [ 00:20:46.260 { 00:20:46.260 "name": "spare", 00:20:46.260 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:46.260 "is_configured": true, 00:20:46.260 "data_offset": 2048, 00:20:46.260 "data_size": 63488 00:20:46.260 }, 00:20:46.260 { 00:20:46.260 "name": "BaseBdev2", 00:20:46.260 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:46.260 "is_configured": true, 00:20:46.260 "data_offset": 2048, 00:20:46.260 "data_size": 63488 00:20:46.260 }, 00:20:46.260 { 00:20:46.260 "name": "BaseBdev3", 00:20:46.260 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:46.260 "is_configured": true, 00:20:46.260 "data_offset": 2048, 00:20:46.260 "data_size": 63488 00:20:46.260 }, 00:20:46.260 { 00:20:46.260 "name": "BaseBdev4", 00:20:46.260 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:46.260 "is_configured": true, 00:20:46.260 "data_offset": 2048, 00:20:46.260 "data_size": 63488 00:20:46.260 } 00:20:46.260 ] 00:20:46.260 }' 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.260 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.260 [2024-11-20 07:17:28.474587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.519 [2024-11-20 07:17:28.568315] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:46.519 [2024-11-20 07:17:28.568454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.519 [2024-11-20 07:17:28.568481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.519 [2024-11-20 07:17:28.568497] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.519 "name": "raid_bdev1", 00:20:46.519 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:46.519 "strip_size_kb": 64, 00:20:46.519 "state": "online", 00:20:46.519 "raid_level": "raid5f", 00:20:46.519 "superblock": true, 00:20:46.519 "num_base_bdevs": 4, 00:20:46.519 "num_base_bdevs_discovered": 3, 00:20:46.519 "num_base_bdevs_operational": 3, 00:20:46.519 "base_bdevs_list": [ 00:20:46.519 { 00:20:46.519 "name": null, 00:20:46.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.519 "is_configured": false, 00:20:46.519 "data_offset": 0, 00:20:46.519 "data_size": 63488 00:20:46.519 }, 00:20:46.519 { 00:20:46.519 "name": "BaseBdev2", 00:20:46.519 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:46.519 "is_configured": true, 00:20:46.519 "data_offset": 2048, 00:20:46.519 "data_size": 63488 00:20:46.519 }, 00:20:46.519 { 00:20:46.519 "name": "BaseBdev3", 00:20:46.519 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:46.519 "is_configured": true, 00:20:46.519 "data_offset": 2048, 00:20:46.519 "data_size": 63488 00:20:46.519 }, 00:20:46.519 { 00:20:46.519 "name": "BaseBdev4", 00:20:46.519 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:46.519 "is_configured": true, 00:20:46.519 "data_offset": 2048, 00:20:46.519 "data_size": 63488 00:20:46.519 } 00:20:46.519 ] 00:20:46.519 }' 
00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.519 07:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.778 07:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:46.778 07:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.778 07:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.778 [2024-11-20 07:17:29.032933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:46.778 [2024-11-20 07:17:29.033027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.778 [2024-11-20 07:17:29.033064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:46.778 [2024-11-20 07:17:29.033080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.778 [2024-11-20 07:17:29.033720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.778 [2024-11-20 07:17:29.033831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:46.778 [2024-11-20 07:17:29.033972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:46.778 [2024-11-20 07:17:29.033994] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:46.778 [2024-11-20 07:17:29.034007] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
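After re-adding `spare`, the trace logs "Started rebuild on raid bdev raid_bdev1" and the harness reads `.process.type` / `.process.target` from the RPC output to confirm a rebuild targeting `spare` is in flight. Waiting for such a rebuild to finish is a polling loop on the process type; below is a sketch with the RPC query replaced by a stub that flips from `rebuild` to `none` after a few polls (in a real run the value would come from `rpc.py bdev_raid_get_bdevs all | jq -r '.process.type // "none"'`):

```shell
#!/usr/bin/env bash
# Stub: pretend the rebuild completes on the third poll. All names here
# are illustrative; only the polling shape matches the harness.
n=0
PTYPE=rebuild
poll_process_type() {
    n=$((n + 1))
    if [ "$n" -lt 3 ]; then PTYPE=rebuild; else PTYPE=none; fi
}

while [ "$PTYPE" = rebuild ]; do
    poll_process_type
    sleep 0.1
done
echo "rebuild finished after $n polls"
```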
00:20:46.778 [2024-11-20 07:17:29.034050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.043 [2024-11-20 07:17:29.053365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:20:47.043 spare 00:20:47.043 07:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.043 07:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:47.043 [2024-11-20 07:17:29.065917] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.993 "name": "raid_bdev1", 00:20:47.993 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:47.993 "strip_size_kb": 64, 00:20:47.993 "state": 
"online", 00:20:47.993 "raid_level": "raid5f", 00:20:47.993 "superblock": true, 00:20:47.993 "num_base_bdevs": 4, 00:20:47.993 "num_base_bdevs_discovered": 4, 00:20:47.993 "num_base_bdevs_operational": 4, 00:20:47.993 "process": { 00:20:47.993 "type": "rebuild", 00:20:47.993 "target": "spare", 00:20:47.993 "progress": { 00:20:47.993 "blocks": 17280, 00:20:47.993 "percent": 9 00:20:47.993 } 00:20:47.993 }, 00:20:47.993 "base_bdevs_list": [ 00:20:47.993 { 00:20:47.993 "name": "spare", 00:20:47.993 "uuid": "9fc6f695-72f7-5966-872a-98ac31fbe841", 00:20:47.993 "is_configured": true, 00:20:47.993 "data_offset": 2048, 00:20:47.993 "data_size": 63488 00:20:47.993 }, 00:20:47.993 { 00:20:47.993 "name": "BaseBdev2", 00:20:47.993 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:47.993 "is_configured": true, 00:20:47.993 "data_offset": 2048, 00:20:47.993 "data_size": 63488 00:20:47.993 }, 00:20:47.993 { 00:20:47.993 "name": "BaseBdev3", 00:20:47.993 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:47.993 "is_configured": true, 00:20:47.993 "data_offset": 2048, 00:20:47.993 "data_size": 63488 00:20:47.993 }, 00:20:47.993 { 00:20:47.993 "name": "BaseBdev4", 00:20:47.993 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:47.993 "is_configured": true, 00:20:47.993 "data_offset": 2048, 00:20:47.993 "data_size": 63488 00:20:47.993 } 00:20:47.993 ] 00:20:47.993 }' 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:47.993 07:17:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.993 [2024-11-20 07:17:30.165572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:47.993 [2024-11-20 07:17:30.175370] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:47.993 [2024-11-20 07:17:30.175563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.993 [2024-11-20 07:17:30.175609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:47.993 [2024-11-20 07:17:30.175621] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.993 07:17:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.993 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.251 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.251 "name": "raid_bdev1", 00:20:48.251 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:48.251 "strip_size_kb": 64, 00:20:48.251 "state": "online", 00:20:48.251 "raid_level": "raid5f", 00:20:48.251 "superblock": true, 00:20:48.251 "num_base_bdevs": 4, 00:20:48.251 "num_base_bdevs_discovered": 3, 00:20:48.251 "num_base_bdevs_operational": 3, 00:20:48.251 "base_bdevs_list": [ 00:20:48.251 { 00:20:48.251 "name": null, 00:20:48.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.251 "is_configured": false, 00:20:48.251 "data_offset": 0, 00:20:48.251 "data_size": 63488 00:20:48.251 }, 00:20:48.251 { 00:20:48.251 "name": "BaseBdev2", 00:20:48.251 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:48.251 "is_configured": true, 00:20:48.251 "data_offset": 2048, 00:20:48.251 "data_size": 63488 00:20:48.251 }, 00:20:48.251 { 00:20:48.251 "name": "BaseBdev3", 00:20:48.251 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:48.251 "is_configured": true, 00:20:48.251 "data_offset": 2048, 00:20:48.251 "data_size": 63488 00:20:48.251 }, 00:20:48.251 { 00:20:48.251 "name": "BaseBdev4", 00:20:48.251 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:48.251 "is_configured": true, 00:20:48.251 "data_offset": 2048, 00:20:48.251 
"data_size": 63488 00:20:48.251 } 00:20:48.251 ] 00:20:48.251 }' 00:20:48.251 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.251 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.519 "name": "raid_bdev1", 00:20:48.519 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:48.519 "strip_size_kb": 64, 00:20:48.519 "state": "online", 00:20:48.519 "raid_level": "raid5f", 00:20:48.519 "superblock": true, 00:20:48.519 "num_base_bdevs": 4, 00:20:48.519 "num_base_bdevs_discovered": 3, 00:20:48.519 "num_base_bdevs_operational": 3, 00:20:48.519 "base_bdevs_list": [ 00:20:48.519 { 00:20:48.519 "name": null, 00:20:48.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.519 
"is_configured": false, 00:20:48.519 "data_offset": 0, 00:20:48.519 "data_size": 63488 00:20:48.519 }, 00:20:48.519 { 00:20:48.519 "name": "BaseBdev2", 00:20:48.519 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:48.519 "is_configured": true, 00:20:48.519 "data_offset": 2048, 00:20:48.519 "data_size": 63488 00:20:48.519 }, 00:20:48.519 { 00:20:48.519 "name": "BaseBdev3", 00:20:48.519 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:48.519 "is_configured": true, 00:20:48.519 "data_offset": 2048, 00:20:48.519 "data_size": 63488 00:20:48.519 }, 00:20:48.519 { 00:20:48.519 "name": "BaseBdev4", 00:20:48.519 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:48.519 "is_configured": true, 00:20:48.519 "data_offset": 2048, 00:20:48.519 "data_size": 63488 00:20:48.519 } 00:20:48.519 ] 00:20:48.519 }' 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:48.519 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.519 07:17:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.519 [2024-11-20 07:17:30.777131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:48.519 [2024-11-20 07:17:30.777201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.519 [2024-11-20 07:17:30.777246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:48.519 [2024-11-20 07:17:30.777258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.519 [2024-11-20 07:17:30.777863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.519 [2024-11-20 07:17:30.777894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:48.519 [2024-11-20 07:17:30.778009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:48.519 [2024-11-20 07:17:30.778025] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:48.519 [2024-11-20 07:17:30.778039] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:48.519 [2024-11-20 07:17:30.778052] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:48.776 BaseBdev1 00:20:48.776 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.776 07:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.710 "name": "raid_bdev1", 00:20:49.710 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:49.710 "strip_size_kb": 64, 00:20:49.710 "state": "online", 00:20:49.710 "raid_level": "raid5f", 00:20:49.710 "superblock": true, 00:20:49.710 "num_base_bdevs": 4, 00:20:49.710 "num_base_bdevs_discovered": 3, 00:20:49.710 "num_base_bdevs_operational": 3, 00:20:49.710 "base_bdevs_list": [ 00:20:49.710 { 00:20:49.710 "name": null, 00:20:49.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.710 "is_configured": false, 00:20:49.710 
"data_offset": 0, 00:20:49.710 "data_size": 63488 00:20:49.710 }, 00:20:49.710 { 00:20:49.710 "name": "BaseBdev2", 00:20:49.710 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:49.710 "is_configured": true, 00:20:49.710 "data_offset": 2048, 00:20:49.710 "data_size": 63488 00:20:49.710 }, 00:20:49.710 { 00:20:49.710 "name": "BaseBdev3", 00:20:49.710 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:49.710 "is_configured": true, 00:20:49.710 "data_offset": 2048, 00:20:49.710 "data_size": 63488 00:20:49.710 }, 00:20:49.710 { 00:20:49.710 "name": "BaseBdev4", 00:20:49.710 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:49.710 "is_configured": true, 00:20:49.710 "data_offset": 2048, 00:20:49.710 "data_size": 63488 00:20:49.710 } 00:20:49.710 ] 00:20:49.710 }' 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.710 07:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.970 "name": "raid_bdev1", 00:20:49.970 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:49.970 "strip_size_kb": 64, 00:20:49.970 "state": "online", 00:20:49.970 "raid_level": "raid5f", 00:20:49.970 "superblock": true, 00:20:49.970 "num_base_bdevs": 4, 00:20:49.970 "num_base_bdevs_discovered": 3, 00:20:49.970 "num_base_bdevs_operational": 3, 00:20:49.970 "base_bdevs_list": [ 00:20:49.970 { 00:20:49.970 "name": null, 00:20:49.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.970 "is_configured": false, 00:20:49.970 "data_offset": 0, 00:20:49.970 "data_size": 63488 00:20:49.970 }, 00:20:49.970 { 00:20:49.970 "name": "BaseBdev2", 00:20:49.970 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:49.970 "is_configured": true, 00:20:49.970 "data_offset": 2048, 00:20:49.970 "data_size": 63488 00:20:49.970 }, 00:20:49.970 { 00:20:49.970 "name": "BaseBdev3", 00:20:49.970 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:49.970 "is_configured": true, 00:20:49.970 "data_offset": 2048, 00:20:49.970 "data_size": 63488 00:20:49.970 }, 00:20:49.970 { 00:20:49.970 "name": "BaseBdev4", 00:20:49.970 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:49.970 "is_configured": true, 00:20:49.970 "data_offset": 2048, 00:20:49.970 "data_size": 63488 00:20:49.970 } 00:20:49.970 ] 00:20:49.970 }' 00:20:49.970 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:50.231 
07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.231 [2024-11-20 07:17:32.306818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.231 [2024-11-20 07:17:32.307084] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:50.231 [2024-11-20 07:17:32.307153] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:50.231 request: 00:20:50.231 { 00:20:50.231 "base_bdev": "BaseBdev1", 00:20:50.231 "raid_bdev": "raid_bdev1", 00:20:50.231 "method": "bdev_raid_add_base_bdev", 00:20:50.231 "req_id": 1 00:20:50.231 } 00:20:50.231 Got JSON-RPC error response 00:20:50.231 response: 00:20:50.231 { 00:20:50.231 "code": -22, 00:20:50.231 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:50.231 } 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.231 07:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.173 "name": "raid_bdev1", 00:20:51.173 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:51.173 "strip_size_kb": 64, 00:20:51.173 "state": "online", 00:20:51.173 "raid_level": "raid5f", 00:20:51.173 "superblock": true, 00:20:51.173 "num_base_bdevs": 4, 00:20:51.173 "num_base_bdevs_discovered": 3, 00:20:51.173 "num_base_bdevs_operational": 3, 00:20:51.173 "base_bdevs_list": [ 00:20:51.173 { 00:20:51.173 "name": null, 00:20:51.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.173 "is_configured": false, 00:20:51.173 "data_offset": 0, 00:20:51.173 "data_size": 63488 00:20:51.173 }, 00:20:51.173 { 00:20:51.173 "name": "BaseBdev2", 00:20:51.173 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:51.173 "is_configured": true, 00:20:51.173 "data_offset": 2048, 00:20:51.173 "data_size": 63488 00:20:51.173 }, 00:20:51.173 { 00:20:51.173 "name": "BaseBdev3", 00:20:51.173 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:51.173 "is_configured": true, 00:20:51.173 "data_offset": 2048, 00:20:51.173 "data_size": 63488 00:20:51.173 }, 00:20:51.173 { 00:20:51.173 "name": "BaseBdev4", 00:20:51.173 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:51.173 "is_configured": true, 00:20:51.173 "data_offset": 2048, 00:20:51.173 "data_size": 63488 00:20:51.173 } 00:20:51.173 ] 00:20:51.173 }' 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.173 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.739 "name": "raid_bdev1", 00:20:51.739 "uuid": "d128bc32-8be2-4df3-a61e-7de12e7c01b2", 00:20:51.739 "strip_size_kb": 64, 00:20:51.739 "state": "online", 00:20:51.739 "raid_level": "raid5f", 00:20:51.739 "superblock": true, 00:20:51.739 "num_base_bdevs": 4, 00:20:51.739 "num_base_bdevs_discovered": 3, 00:20:51.739 "num_base_bdevs_operational": 3, 00:20:51.739 "base_bdevs_list": [ 00:20:51.739 { 00:20:51.739 "name": null, 00:20:51.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.739 "is_configured": false, 00:20:51.739 "data_offset": 0, 00:20:51.739 "data_size": 63488 00:20:51.739 }, 00:20:51.739 { 00:20:51.739 "name": "BaseBdev2", 00:20:51.739 "uuid": "3555560a-745e-5140-848e-3c1ad00e3f1e", 00:20:51.739 "is_configured": true, 
00:20:51.739 "data_offset": 2048, 00:20:51.739 "data_size": 63488 00:20:51.739 }, 00:20:51.739 { 00:20:51.739 "name": "BaseBdev3", 00:20:51.739 "uuid": "b4eedd04-5319-593c-b45d-3bc900bc9679", 00:20:51.739 "is_configured": true, 00:20:51.739 "data_offset": 2048, 00:20:51.739 "data_size": 63488 00:20:51.739 }, 00:20:51.739 { 00:20:51.739 "name": "BaseBdev4", 00:20:51.739 "uuid": "f101e580-5314-5ad8-9d0b-3a314dd9a49e", 00:20:51.739 "is_configured": true, 00:20:51.739 "data_offset": 2048, 00:20:51.739 "data_size": 63488 00:20:51.739 } 00:20:51.739 ] 00:20:51.739 }' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85641 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85641 ']' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85641 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85641 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.739 killing process with pid 85641 00:20:51.739 Received shutdown signal, test time was about 60.000000 seconds 00:20:51.739 00:20:51.739 Latency(us) 00:20:51.739 [2024-11-20T07:17:34.004Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.739 [2024-11-20T07:17:34.004Z] =================================================================================================================== 00:20:51.739 [2024-11-20T07:17:34.004Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85641' 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85641 00:20:51.739 [2024-11-20 07:17:33.894587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.739 07:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85641 00:20:51.739 [2024-11-20 07:17:33.894750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.739 [2024-11-20 07:17:33.894851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.739 [2024-11-20 07:17:33.894875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:52.306 [2024-11-20 07:17:34.417775] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.681 07:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:53.681 00:20:53.681 real 0m27.177s 00:20:53.681 user 0m33.875s 00:20:53.681 sys 0m2.882s 00:20:53.681 07:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.681 ************************************ 00:20:53.681 END TEST raid5f_rebuild_test_sb 00:20:53.681 ************************************ 00:20:53.681 07:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.681 07:17:35 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:53.681 07:17:35 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:53.681 07:17:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:53.681 07:17:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.681 07:17:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:53.681 ************************************ 00:20:53.681 START TEST raid_state_function_test_sb_4k 00:20:53.681 ************************************ 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:53.681 07:17:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86453 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86453' 00:20:53.681 Process raid pid: 86453 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86453 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86453 ']' 00:20:53.681 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.681 07:17:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.681 [2024-11-20 07:17:35.891211] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:53.681 [2024-11-20 07:17:35.891540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.939 [2024-11-20 07:17:36.064407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.197 [2024-11-20 07:17:36.204056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.197 [2024-11-20 07:17:36.452121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.197 [2024-11-20 07:17:36.452168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:54.762 07:17:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.762 [2024-11-20 07:17:36.872925] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:54.762 [2024-11-20 07:17:36.872987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:54.762 [2024-11-20 07:17:36.873000] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:54.762 [2024-11-20 07:17:36.873011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.762 07:17:36 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.762 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.763 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.763 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.763 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.763 "name": "Existed_Raid", 00:20:54.763 "uuid": "450be5e5-a381-4a0c-80f3-de94784aebe7", 00:20:54.763 "strip_size_kb": 0, 00:20:54.763 "state": "configuring", 00:20:54.763 "raid_level": "raid1", 00:20:54.763 "superblock": true, 00:20:54.763 "num_base_bdevs": 2, 00:20:54.763 "num_base_bdevs_discovered": 0, 00:20:54.763 "num_base_bdevs_operational": 2, 00:20:54.763 "base_bdevs_list": [ 00:20:54.763 { 00:20:54.763 "name": "BaseBdev1", 00:20:54.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.763 "is_configured": false, 00:20:54.763 "data_offset": 0, 00:20:54.763 "data_size": 0 00:20:54.763 }, 00:20:54.763 { 00:20:54.763 "name": "BaseBdev2", 00:20:54.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.763 "is_configured": false, 00:20:54.763 "data_offset": 0, 00:20:54.763 "data_size": 0 00:20:54.763 } 00:20:54.763 ] 00:20:54.763 }' 00:20:54.763 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.763 07:17:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.328 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.329 [2024-11-20 07:17:37.304191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:55.329 [2024-11-20 07:17:37.304342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.329 [2024-11-20 07:17:37.312173] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:55.329 [2024-11-20 07:17:37.312228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:55.329 [2024-11-20 07:17:37.312238] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.329 [2024-11-20 07:17:37.312252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:20:55.329 [2024-11-20 07:17:37.362113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.329 BaseBdev1 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.329 [ 00:20:55.329 { 00:20:55.329 "name": "BaseBdev1", 00:20:55.329 "aliases": [ 00:20:55.329 "0a18577b-df53-4acf-a3ec-ae260b6b502b" 00:20:55.329 
], 00:20:55.329 "product_name": "Malloc disk", 00:20:55.329 "block_size": 4096, 00:20:55.329 "num_blocks": 8192, 00:20:55.329 "uuid": "0a18577b-df53-4acf-a3ec-ae260b6b502b", 00:20:55.329 "assigned_rate_limits": { 00:20:55.329 "rw_ios_per_sec": 0, 00:20:55.329 "rw_mbytes_per_sec": 0, 00:20:55.329 "r_mbytes_per_sec": 0, 00:20:55.329 "w_mbytes_per_sec": 0 00:20:55.329 }, 00:20:55.329 "claimed": true, 00:20:55.329 "claim_type": "exclusive_write", 00:20:55.329 "zoned": false, 00:20:55.329 "supported_io_types": { 00:20:55.329 "read": true, 00:20:55.329 "write": true, 00:20:55.329 "unmap": true, 00:20:55.329 "flush": true, 00:20:55.329 "reset": true, 00:20:55.329 "nvme_admin": false, 00:20:55.329 "nvme_io": false, 00:20:55.329 "nvme_io_md": false, 00:20:55.329 "write_zeroes": true, 00:20:55.329 "zcopy": true, 00:20:55.329 "get_zone_info": false, 00:20:55.329 "zone_management": false, 00:20:55.329 "zone_append": false, 00:20:55.329 "compare": false, 00:20:55.329 "compare_and_write": false, 00:20:55.329 "abort": true, 00:20:55.329 "seek_hole": false, 00:20:55.329 "seek_data": false, 00:20:55.329 "copy": true, 00:20:55.329 "nvme_iov_md": false 00:20:55.329 }, 00:20:55.329 "memory_domains": [ 00:20:55.329 { 00:20:55.329 "dma_device_id": "system", 00:20:55.329 "dma_device_type": 1 00:20:55.329 }, 00:20:55.329 { 00:20:55.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.329 "dma_device_type": 2 00:20:55.329 } 00:20:55.329 ], 00:20:55.329 "driver_specific": {} 00:20:55.329 } 00:20:55.329 ] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.329 "name": "Existed_Raid", 00:20:55.329 "uuid": "d1545726-bd45-45ff-bf92-beda9e6b6096", 00:20:55.329 "strip_size_kb": 0, 00:20:55.329 "state": "configuring", 00:20:55.329 "raid_level": "raid1", 00:20:55.329 "superblock": true, 00:20:55.329 "num_base_bdevs": 2, 00:20:55.329 "num_base_bdevs_discovered": 1, 
00:20:55.329 "num_base_bdevs_operational": 2, 00:20:55.329 "base_bdevs_list": [ 00:20:55.329 { 00:20:55.329 "name": "BaseBdev1", 00:20:55.329 "uuid": "0a18577b-df53-4acf-a3ec-ae260b6b502b", 00:20:55.329 "is_configured": true, 00:20:55.329 "data_offset": 256, 00:20:55.329 "data_size": 7936 00:20:55.329 }, 00:20:55.329 { 00:20:55.329 "name": "BaseBdev2", 00:20:55.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.329 "is_configured": false, 00:20:55.329 "data_offset": 0, 00:20:55.329 "data_size": 0 00:20:55.329 } 00:20:55.329 ] 00:20:55.329 }' 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.329 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.587 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:55.587 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.587 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.846 [2024-11-20 07:17:37.853568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:55.846 [2024-11-20 07:17:37.853659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.846 [2024-11-20 07:17:37.861589] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.846 [2024-11-20 07:17:37.863760] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.846 [2024-11-20 07:17:37.863815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.846 "name": "Existed_Raid", 00:20:55.846 "uuid": "e4d7108b-8974-4b00-b8e4-5da8eb4199ac", 00:20:55.846 "strip_size_kb": 0, 00:20:55.846 "state": "configuring", 00:20:55.846 "raid_level": "raid1", 00:20:55.846 "superblock": true, 00:20:55.846 "num_base_bdevs": 2, 00:20:55.846 "num_base_bdevs_discovered": 1, 00:20:55.846 "num_base_bdevs_operational": 2, 00:20:55.846 "base_bdevs_list": [ 00:20:55.846 { 00:20:55.846 "name": "BaseBdev1", 00:20:55.846 "uuid": "0a18577b-df53-4acf-a3ec-ae260b6b502b", 00:20:55.846 "is_configured": true, 00:20:55.846 "data_offset": 256, 00:20:55.846 "data_size": 7936 00:20:55.846 }, 00:20:55.846 { 00:20:55.846 "name": "BaseBdev2", 00:20:55.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.846 "is_configured": false, 00:20:55.846 "data_offset": 0, 00:20:55.846 "data_size": 0 00:20:55.846 } 00:20:55.846 ] 00:20:55.846 }' 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.846 07:17:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.104 07:17:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.104 [2024-11-20 07:17:38.331858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.104 [2024-11-20 07:17:38.332181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:56.104 [2024-11-20 07:17:38.332201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:56.104 BaseBdev2 00:20:56.104 [2024-11-20 07:17:38.332569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:56.104 [2024-11-20 07:17:38.332775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:56.104 [2024-11-20 07:17:38.332800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.104 [2024-11-20 07:17:38.332977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:56.104 07:17:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.104 [ 00:20:56.104 { 00:20:56.104 "name": "BaseBdev2", 00:20:56.104 "aliases": [ 00:20:56.104 "aaae13bc-7ff6-4078-ac78-c4b5042586ba" 00:20:56.104 ], 00:20:56.104 "product_name": "Malloc disk", 00:20:56.104 "block_size": 4096, 00:20:56.104 "num_blocks": 8192, 00:20:56.104 "uuid": "aaae13bc-7ff6-4078-ac78-c4b5042586ba", 00:20:56.104 "assigned_rate_limits": { 00:20:56.104 "rw_ios_per_sec": 0, 00:20:56.104 "rw_mbytes_per_sec": 0, 00:20:56.104 "r_mbytes_per_sec": 0, 00:20:56.104 "w_mbytes_per_sec": 0 00:20:56.104 }, 00:20:56.104 "claimed": true, 00:20:56.104 "claim_type": "exclusive_write", 00:20:56.104 "zoned": false, 00:20:56.104 "supported_io_types": { 00:20:56.104 "read": true, 00:20:56.104 "write": true, 00:20:56.104 "unmap": true, 00:20:56.104 "flush": true, 00:20:56.104 "reset": true, 00:20:56.104 "nvme_admin": false, 00:20:56.104 "nvme_io": false, 00:20:56.104 "nvme_io_md": false, 00:20:56.104 "write_zeroes": true, 00:20:56.104 "zcopy": true, 00:20:56.104 "get_zone_info": false, 00:20:56.104 "zone_management": false, 00:20:56.104 "zone_append": false, 00:20:56.104 "compare": false, 00:20:56.104 "compare_and_write": false, 00:20:56.104 "abort": true, 00:20:56.104 "seek_hole": false, 00:20:56.104 "seek_data": false, 00:20:56.104 "copy": true, 00:20:56.104 "nvme_iov_md": false 
00:20:56.104 }, 00:20:56.104 "memory_domains": [ 00:20:56.104 { 00:20:56.104 "dma_device_id": "system", 00:20:56.104 "dma_device_type": 1 00:20:56.104 }, 00:20:56.104 { 00:20:56.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.104 "dma_device_type": 2 00:20:56.104 } 00:20:56.104 ], 00:20:56.104 "driver_specific": {} 00:20:56.104 } 00:20:56.104 ] 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:56.104 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.105 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.364 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.364 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.364 "name": "Existed_Raid", 00:20:56.364 "uuid": "e4d7108b-8974-4b00-b8e4-5da8eb4199ac", 00:20:56.364 "strip_size_kb": 0, 00:20:56.364 "state": "online", 00:20:56.364 "raid_level": "raid1", 00:20:56.364 "superblock": true, 00:20:56.364 "num_base_bdevs": 2, 00:20:56.364 "num_base_bdevs_discovered": 2, 00:20:56.364 "num_base_bdevs_operational": 2, 00:20:56.364 "base_bdevs_list": [ 00:20:56.364 { 00:20:56.364 "name": "BaseBdev1", 00:20:56.364 "uuid": "0a18577b-df53-4acf-a3ec-ae260b6b502b", 00:20:56.364 "is_configured": true, 00:20:56.364 "data_offset": 256, 00:20:56.364 "data_size": 7936 00:20:56.364 }, 00:20:56.364 { 00:20:56.364 "name": "BaseBdev2", 00:20:56.364 "uuid": "aaae13bc-7ff6-4078-ac78-c4b5042586ba", 00:20:56.364 "is_configured": true, 00:20:56.364 "data_offset": 256, 00:20:56.364 "data_size": 7936 00:20:56.364 } 00:20:56.364 ] 00:20:56.364 }' 00:20:56.364 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.364 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:56.641 07:17:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:56.641 [2024-11-20 07:17:38.779576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:56.641 "name": "Existed_Raid", 00:20:56.641 "aliases": [ 00:20:56.641 "e4d7108b-8974-4b00-b8e4-5da8eb4199ac" 00:20:56.641 ], 00:20:56.641 "product_name": "Raid Volume", 00:20:56.641 "block_size": 4096, 00:20:56.641 "num_blocks": 7936, 00:20:56.641 "uuid": "e4d7108b-8974-4b00-b8e4-5da8eb4199ac", 00:20:56.641 "assigned_rate_limits": { 00:20:56.641 "rw_ios_per_sec": 0, 00:20:56.641 "rw_mbytes_per_sec": 0, 00:20:56.641 "r_mbytes_per_sec": 0, 00:20:56.641 "w_mbytes_per_sec": 0 00:20:56.641 }, 00:20:56.641 "claimed": false, 00:20:56.641 "zoned": false, 00:20:56.641 "supported_io_types": { 00:20:56.641 "read": true, 
00:20:56.641 "write": true, 00:20:56.641 "unmap": false, 00:20:56.641 "flush": false, 00:20:56.641 "reset": true, 00:20:56.641 "nvme_admin": false, 00:20:56.641 "nvme_io": false, 00:20:56.641 "nvme_io_md": false, 00:20:56.641 "write_zeroes": true, 00:20:56.641 "zcopy": false, 00:20:56.641 "get_zone_info": false, 00:20:56.641 "zone_management": false, 00:20:56.641 "zone_append": false, 00:20:56.641 "compare": false, 00:20:56.641 "compare_and_write": false, 00:20:56.641 "abort": false, 00:20:56.641 "seek_hole": false, 00:20:56.641 "seek_data": false, 00:20:56.641 "copy": false, 00:20:56.641 "nvme_iov_md": false 00:20:56.641 }, 00:20:56.641 "memory_domains": [ 00:20:56.641 { 00:20:56.641 "dma_device_id": "system", 00:20:56.641 "dma_device_type": 1 00:20:56.641 }, 00:20:56.641 { 00:20:56.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.641 "dma_device_type": 2 00:20:56.641 }, 00:20:56.641 { 00:20:56.641 "dma_device_id": "system", 00:20:56.641 "dma_device_type": 1 00:20:56.641 }, 00:20:56.641 { 00:20:56.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.641 "dma_device_type": 2 00:20:56.641 } 00:20:56.641 ], 00:20:56.641 "driver_specific": { 00:20:56.641 "raid": { 00:20:56.641 "uuid": "e4d7108b-8974-4b00-b8e4-5da8eb4199ac", 00:20:56.641 "strip_size_kb": 0, 00:20:56.641 "state": "online", 00:20:56.641 "raid_level": "raid1", 00:20:56.641 "superblock": true, 00:20:56.641 "num_base_bdevs": 2, 00:20:56.641 "num_base_bdevs_discovered": 2, 00:20:56.641 "num_base_bdevs_operational": 2, 00:20:56.641 "base_bdevs_list": [ 00:20:56.641 { 00:20:56.641 "name": "BaseBdev1", 00:20:56.641 "uuid": "0a18577b-df53-4acf-a3ec-ae260b6b502b", 00:20:56.641 "is_configured": true, 00:20:56.641 "data_offset": 256, 00:20:56.641 "data_size": 7936 00:20:56.641 }, 00:20:56.641 { 00:20:56.641 "name": "BaseBdev2", 00:20:56.641 "uuid": "aaae13bc-7ff6-4078-ac78-c4b5042586ba", 00:20:56.641 "is_configured": true, 00:20:56.641 "data_offset": 256, 00:20:56.641 "data_size": 7936 00:20:56.641 } 
00:20:56.641 ] 00:20:56.641 } 00:20:56.641 } 00:20:56.641 }' 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:56.641 BaseBdev2' 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.641 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.899 07:17:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.899 [2024-11-20 07:17:38.966959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.899 "name": "Existed_Raid", 00:20:56.899 "uuid": "e4d7108b-8974-4b00-b8e4-5da8eb4199ac", 00:20:56.899 "strip_size_kb": 0, 00:20:56.899 "state": "online", 00:20:56.899 "raid_level": "raid1", 00:20:56.899 "superblock": true, 00:20:56.899 "num_base_bdevs": 2, 00:20:56.899 
"num_base_bdevs_discovered": 1, 00:20:56.899 "num_base_bdevs_operational": 1, 00:20:56.899 "base_bdevs_list": [ 00:20:56.899 { 00:20:56.899 "name": null, 00:20:56.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.899 "is_configured": false, 00:20:56.899 "data_offset": 0, 00:20:56.899 "data_size": 7936 00:20:56.899 }, 00:20:56.899 { 00:20:56.899 "name": "BaseBdev2", 00:20:56.899 "uuid": "aaae13bc-7ff6-4078-ac78-c4b5042586ba", 00:20:56.899 "is_configured": true, 00:20:56.899 "data_offset": 256, 00:20:56.899 "data_size": 7936 00:20:56.899 } 00:20:56.899 ] 00:20:56.899 }' 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.899 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:57.465 07:17:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 [2024-11-20 07:17:39.555234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:57.465 [2024-11-20 07:17:39.555540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.465 [2024-11-20 07:17:39.671888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.465 [2024-11-20 07:17:39.672049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.465 [2024-11-20 07:17:39.672104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86453 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86453 ']' 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86453 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.465 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86453 00:20:57.723 killing process with pid 86453 00:20:57.723 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.723 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.723 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86453' 00:20:57.723 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86453 00:20:57.723 [2024-11-20 07:17:39.746405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:57.723 07:17:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86453 00:20:57.723 [2024-11-20 07:17:39.766982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:59.098 07:17:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:59.098 00:20:59.098 real 0m5.318s 00:20:59.098 user 0m7.571s 00:20:59.098 sys 0m0.776s 00:20:59.098 07:17:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:20:59.098 ************************************ 00:20:59.098 END TEST raid_state_function_test_sb_4k 00:20:59.098 ************************************ 00:20:59.098 07:17:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.098 07:17:41 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:59.098 07:17:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:59.098 07:17:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.098 07:17:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:59.098 ************************************ 00:20:59.098 START TEST raid_superblock_test_4k 00:20:59.098 ************************************ 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86700 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86700 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86700 ']' 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.098 07:17:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.098 [2024-11-20 07:17:41.258182] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:59.098 [2024-11-20 07:17:41.258393] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86700 ] 00:20:59.355 [2024-11-20 07:17:41.437146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.355 [2024-11-20 07:17:41.550207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.614 [2024-11-20 07:17:41.759613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.614 [2024-11-20 07:17:41.759643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.182 malloc1 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.182 [2024-11-20 07:17:42.238950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:00.182 [2024-11-20 07:17:42.239091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.182 [2024-11-20 07:17:42.239148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:00.182 [2024-11-20 07:17:42.239193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.182 [2024-11-20 07:17:42.241635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.182 [2024-11-20 07:17:42.241716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:00.182 pt1 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.182 malloc2 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.182 [2024-11-20 07:17:42.300290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:00.182 [2024-11-20 07:17:42.300438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.182 [2024-11-20 07:17:42.300490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:00.182 [2024-11-20 07:17:42.300542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.182 [2024-11-20 07:17:42.302823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.182 [2024-11-20 
07:17:42.302894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:00.182 pt2 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:00.182 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.183 [2024-11-20 07:17:42.312341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:00.183 [2024-11-20 07:17:42.314398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:00.183 [2024-11-20 07:17:42.314574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:00.183 [2024-11-20 07:17:42.314592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:00.183 [2024-11-20 07:17:42.314825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:00.183 [2024-11-20 07:17:42.314998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:00.183 [2024-11-20 07:17:42.315013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:00.183 [2024-11-20 07:17:42.315170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.183 "name": "raid_bdev1", 00:21:00.183 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:00.183 "strip_size_kb": 0, 00:21:00.183 "state": "online", 00:21:00.183 "raid_level": "raid1", 00:21:00.183 "superblock": true, 00:21:00.183 "num_base_bdevs": 2, 00:21:00.183 
"num_base_bdevs_discovered": 2, 00:21:00.183 "num_base_bdevs_operational": 2, 00:21:00.183 "base_bdevs_list": [ 00:21:00.183 { 00:21:00.183 "name": "pt1", 00:21:00.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:00.183 "is_configured": true, 00:21:00.183 "data_offset": 256, 00:21:00.183 "data_size": 7936 00:21:00.183 }, 00:21:00.183 { 00:21:00.183 "name": "pt2", 00:21:00.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.183 "is_configured": true, 00:21:00.183 "data_offset": 256, 00:21:00.183 "data_size": 7936 00:21:00.183 } 00:21:00.183 ] 00:21:00.183 }' 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.183 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:00.751 [2024-11-20 07:17:42.795800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.751 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:00.751 "name": "raid_bdev1", 00:21:00.751 "aliases": [ 00:21:00.751 "4ba88581-f354-42a8-be42-cadd292bdd36" 00:21:00.751 ], 00:21:00.751 "product_name": "Raid Volume", 00:21:00.751 "block_size": 4096, 00:21:00.751 "num_blocks": 7936, 00:21:00.751 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:00.751 "assigned_rate_limits": { 00:21:00.751 "rw_ios_per_sec": 0, 00:21:00.751 "rw_mbytes_per_sec": 0, 00:21:00.751 "r_mbytes_per_sec": 0, 00:21:00.751 "w_mbytes_per_sec": 0 00:21:00.751 }, 00:21:00.751 "claimed": false, 00:21:00.751 "zoned": false, 00:21:00.751 "supported_io_types": { 00:21:00.752 "read": true, 00:21:00.752 "write": true, 00:21:00.752 "unmap": false, 00:21:00.752 "flush": false, 00:21:00.752 "reset": true, 00:21:00.752 "nvme_admin": false, 00:21:00.752 "nvme_io": false, 00:21:00.752 "nvme_io_md": false, 00:21:00.752 "write_zeroes": true, 00:21:00.752 "zcopy": false, 00:21:00.752 "get_zone_info": false, 00:21:00.752 "zone_management": false, 00:21:00.752 "zone_append": false, 00:21:00.752 "compare": false, 00:21:00.752 "compare_and_write": false, 00:21:00.752 "abort": false, 00:21:00.752 "seek_hole": false, 00:21:00.752 "seek_data": false, 00:21:00.752 "copy": false, 00:21:00.752 "nvme_iov_md": false 00:21:00.752 }, 00:21:00.752 "memory_domains": [ 00:21:00.752 { 00:21:00.752 "dma_device_id": "system", 00:21:00.752 "dma_device_type": 1 00:21:00.752 }, 00:21:00.752 { 00:21:00.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.752 "dma_device_type": 2 00:21:00.752 }, 00:21:00.752 { 00:21:00.752 "dma_device_id": "system", 00:21:00.752 "dma_device_type": 1 00:21:00.752 }, 00:21:00.752 { 00:21:00.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.752 "dma_device_type": 2 00:21:00.752 } 00:21:00.752 ], 
00:21:00.752 "driver_specific": { 00:21:00.752 "raid": { 00:21:00.752 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:00.752 "strip_size_kb": 0, 00:21:00.752 "state": "online", 00:21:00.752 "raid_level": "raid1", 00:21:00.752 "superblock": true, 00:21:00.752 "num_base_bdevs": 2, 00:21:00.752 "num_base_bdevs_discovered": 2, 00:21:00.752 "num_base_bdevs_operational": 2, 00:21:00.752 "base_bdevs_list": [ 00:21:00.752 { 00:21:00.752 "name": "pt1", 00:21:00.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:00.752 "is_configured": true, 00:21:00.752 "data_offset": 256, 00:21:00.752 "data_size": 7936 00:21:00.752 }, 00:21:00.752 { 00:21:00.752 "name": "pt2", 00:21:00.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.752 "is_configured": true, 00:21:00.752 "data_offset": 256, 00:21:00.752 "data_size": 7936 00:21:00.752 } 00:21:00.752 ] 00:21:00.752 } 00:21:00.752 } 00:21:00.752 }' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:00.752 pt2' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.752 07:17:42 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:00.752 [2024-11-20 07:17:42.975539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.752 07:17:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:00.752 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4ba88581-f354-42a8-be42-cadd292bdd36 00:21:00.752 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 4ba88581-f354-42a8-be42-cadd292bdd36 ']' 00:21:00.752 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:00.752 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.752 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 [2024-11-20 07:17:43.015131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.012 [2024-11-20 07:17:43.015162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.012 [2024-11-20 07:17:43.015256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.012 [2024-11-20 07:17:43.015324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.012 [2024-11-20 07:17:43.015363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 [2024-11-20 07:17:43.111034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:01.012 [2024-11-20 07:17:43.113664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:01.012 [2024-11-20 07:17:43.113854] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:01.012 [2024-11-20 07:17:43.113968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:01.012 [2024-11-20 07:17:43.113997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.012 [2024-11-20 07:17:43.114015] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:01.012 request: 00:21:01.012 { 00:21:01.012 "name": "raid_bdev1", 00:21:01.012 "raid_level": "raid1", 00:21:01.012 "base_bdevs": [ 00:21:01.012 "malloc1", 00:21:01.012 "malloc2" 00:21:01.012 ], 00:21:01.012 "superblock": false, 00:21:01.012 "method": "bdev_raid_create", 00:21:01.012 "req_id": 1 00:21:01.012 } 00:21:01.012 Got JSON-RPC error response 00:21:01.012 response: 00:21:01.012 { 00:21:01.012 "code": -17, 00:21:01.012 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:01.012 } 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:01.012 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.013 [2024-11-20 07:17:43.166890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:01.013 [2024-11-20 07:17:43.167001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.013 [2024-11-20 07:17:43.167038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:01.013 [2024-11-20 07:17:43.167069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.013 [2024-11-20 07:17:43.169571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.013 [2024-11-20 07:17:43.169670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:01.013 [2024-11-20 07:17:43.169792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:01.013 [2024-11-20 07:17:43.169904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:01.013 pt1 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.013 "name": "raid_bdev1", 00:21:01.013 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:01.013 "strip_size_kb": 0, 00:21:01.013 "state": "configuring", 00:21:01.013 "raid_level": "raid1", 00:21:01.013 "superblock": true, 00:21:01.013 "num_base_bdevs": 2, 00:21:01.013 "num_base_bdevs_discovered": 1, 00:21:01.013 "num_base_bdevs_operational": 2, 00:21:01.013 "base_bdevs_list": [ 00:21:01.013 { 00:21:01.013 "name": "pt1", 00:21:01.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.013 "is_configured": true, 00:21:01.013 "data_offset": 256, 00:21:01.013 "data_size": 7936 00:21:01.013 }, 00:21:01.013 { 00:21:01.013 "name": null, 00:21:01.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.013 "is_configured": false, 00:21:01.013 "data_offset": 256, 00:21:01.013 "data_size": 7936 00:21:01.013 } 
00:21:01.013 ] 00:21:01.013 }' 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.013 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.646 [2024-11-20 07:17:43.634127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:01.646 [2024-11-20 07:17:43.634250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.646 [2024-11-20 07:17:43.634278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:01.646 [2024-11-20 07:17:43.634290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.646 [2024-11-20 07:17:43.634765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.646 [2024-11-20 07:17:43.634793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:01.646 [2024-11-20 07:17:43.634877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:01.646 [2024-11-20 07:17:43.634903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:01.646 [2024-11-20 07:17:43.635032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:01.646 [2024-11-20 07:17:43.635043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:01.646 [2024-11-20 07:17:43.635272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:01.646 [2024-11-20 07:17:43.635450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:01.646 [2024-11-20 07:17:43.635462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:01.646 [2024-11-20 07:17:43.635610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.646 pt2 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.646 "name": "raid_bdev1", 00:21:01.646 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:01.646 "strip_size_kb": 0, 00:21:01.646 "state": "online", 00:21:01.646 "raid_level": "raid1", 00:21:01.646 "superblock": true, 00:21:01.646 "num_base_bdevs": 2, 00:21:01.646 "num_base_bdevs_discovered": 2, 00:21:01.646 "num_base_bdevs_operational": 2, 00:21:01.646 "base_bdevs_list": [ 00:21:01.646 { 00:21:01.646 "name": "pt1", 00:21:01.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.646 "is_configured": true, 00:21:01.646 "data_offset": 256, 00:21:01.646 "data_size": 7936 00:21:01.646 }, 00:21:01.646 { 00:21:01.646 "name": "pt2", 00:21:01.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.646 "is_configured": true, 00:21:01.646 "data_offset": 256, 00:21:01.646 "data_size": 7936 00:21:01.646 } 00:21:01.646 ] 00:21:01.646 }' 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.646 07:17:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.906 [2024-11-20 07:17:44.101663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.906 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:01.906 "name": "raid_bdev1", 00:21:01.906 "aliases": [ 00:21:01.906 "4ba88581-f354-42a8-be42-cadd292bdd36" 00:21:01.906 ], 00:21:01.906 "product_name": "Raid Volume", 00:21:01.906 "block_size": 4096, 00:21:01.906 "num_blocks": 7936, 00:21:01.906 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:01.906 "assigned_rate_limits": { 00:21:01.906 "rw_ios_per_sec": 0, 00:21:01.906 "rw_mbytes_per_sec": 0, 00:21:01.906 "r_mbytes_per_sec": 0, 00:21:01.906 "w_mbytes_per_sec": 0 00:21:01.906 }, 00:21:01.906 "claimed": false, 00:21:01.906 "zoned": false, 00:21:01.906 "supported_io_types": { 00:21:01.906 "read": true, 00:21:01.906 "write": true, 00:21:01.906 "unmap": false, 
00:21:01.906 "flush": false, 00:21:01.906 "reset": true, 00:21:01.906 "nvme_admin": false, 00:21:01.906 "nvme_io": false, 00:21:01.906 "nvme_io_md": false, 00:21:01.906 "write_zeroes": true, 00:21:01.906 "zcopy": false, 00:21:01.906 "get_zone_info": false, 00:21:01.906 "zone_management": false, 00:21:01.906 "zone_append": false, 00:21:01.906 "compare": false, 00:21:01.906 "compare_and_write": false, 00:21:01.906 "abort": false, 00:21:01.906 "seek_hole": false, 00:21:01.906 "seek_data": false, 00:21:01.906 "copy": false, 00:21:01.906 "nvme_iov_md": false 00:21:01.906 }, 00:21:01.906 "memory_domains": [ 00:21:01.906 { 00:21:01.906 "dma_device_id": "system", 00:21:01.906 "dma_device_type": 1 00:21:01.906 }, 00:21:01.906 { 00:21:01.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.906 "dma_device_type": 2 00:21:01.906 }, 00:21:01.906 { 00:21:01.906 "dma_device_id": "system", 00:21:01.906 "dma_device_type": 1 00:21:01.906 }, 00:21:01.906 { 00:21:01.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.906 "dma_device_type": 2 00:21:01.906 } 00:21:01.906 ], 00:21:01.906 "driver_specific": { 00:21:01.906 "raid": { 00:21:01.906 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:01.906 "strip_size_kb": 0, 00:21:01.906 "state": "online", 00:21:01.906 "raid_level": "raid1", 00:21:01.906 "superblock": true, 00:21:01.906 "num_base_bdevs": 2, 00:21:01.906 "num_base_bdevs_discovered": 2, 00:21:01.906 "num_base_bdevs_operational": 2, 00:21:01.906 "base_bdevs_list": [ 00:21:01.906 { 00:21:01.906 "name": "pt1", 00:21:01.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.906 "is_configured": true, 00:21:01.906 "data_offset": 256, 00:21:01.906 "data_size": 7936 00:21:01.906 }, 00:21:01.906 { 00:21:01.906 "name": "pt2", 00:21:01.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.906 "is_configured": true, 00:21:01.906 "data_offset": 256, 00:21:01.906 "data_size": 7936 00:21:01.906 } 00:21:01.906 ] 00:21:01.906 } 00:21:01.906 } 00:21:01.906 }' 00:21:01.906 
07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:02.165 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:02.165 pt2' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:02.166 [2024-11-20 07:17:44.293403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 4ba88581-f354-42a8-be42-cadd292bdd36 '!=' 4ba88581-f354-42a8-be42-cadd292bdd36 ']' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.166 [2024-11-20 07:17:44.337012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.166 "name": "raid_bdev1", 00:21:02.166 "uuid": 
"4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:02.166 "strip_size_kb": 0, 00:21:02.166 "state": "online", 00:21:02.166 "raid_level": "raid1", 00:21:02.166 "superblock": true, 00:21:02.166 "num_base_bdevs": 2, 00:21:02.166 "num_base_bdevs_discovered": 1, 00:21:02.166 "num_base_bdevs_operational": 1, 00:21:02.166 "base_bdevs_list": [ 00:21:02.166 { 00:21:02.166 "name": null, 00:21:02.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.166 "is_configured": false, 00:21:02.166 "data_offset": 0, 00:21:02.166 "data_size": 7936 00:21:02.166 }, 00:21:02.166 { 00:21:02.166 "name": "pt2", 00:21:02.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.166 "is_configured": true, 00:21:02.166 "data_offset": 256, 00:21:02.166 "data_size": 7936 00:21:02.166 } 00:21:02.166 ] 00:21:02.166 }' 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.166 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.735 [2024-11-20 07:17:44.756341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.735 [2024-11-20 07:17:44.756433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.735 [2024-11-20 07:17:44.756549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.735 [2024-11-20 07:17:44.756619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.735 [2024-11-20 07:17:44.756673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.735 [2024-11-20 07:17:44.832172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.735 [2024-11-20 07:17:44.832243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.735 [2024-11-20 07:17:44.832265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:02.735 [2024-11-20 07:17:44.832277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.735 [2024-11-20 07:17:44.834784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.735 [2024-11-20 07:17:44.834877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.735 [2024-11-20 07:17:44.834985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:02.735 [2024-11-20 07:17:44.835044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.735 [2024-11-20 07:17:44.835161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:02.735 [2024-11-20 07:17:44.835176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:02.735 [2024-11-20 07:17:44.835455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:02.735 [2024-11-20 07:17:44.835626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:02.735 [2024-11-20 07:17:44.835643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:21:02.735 [2024-11-20 07:17:44.835796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.735 pt2 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.735 07:17:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.735 "name": "raid_bdev1", 00:21:02.735 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:02.735 "strip_size_kb": 0, 00:21:02.735 "state": "online", 00:21:02.735 "raid_level": "raid1", 00:21:02.735 "superblock": true, 00:21:02.735 "num_base_bdevs": 2, 00:21:02.735 "num_base_bdevs_discovered": 1, 00:21:02.735 "num_base_bdevs_operational": 1, 00:21:02.735 "base_bdevs_list": [ 00:21:02.735 { 00:21:02.735 "name": null, 00:21:02.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.735 "is_configured": false, 00:21:02.735 "data_offset": 256, 00:21:02.735 "data_size": 7936 00:21:02.735 }, 00:21:02.735 { 00:21:02.735 "name": "pt2", 00:21:02.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.735 "is_configured": true, 00:21:02.735 "data_offset": 256, 00:21:02.735 "data_size": 7936 00:21:02.735 } 00:21:02.735 ] 00:21:02.735 }' 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.735 07:17:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.304 [2024-11-20 07:17:45.283407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.304 [2024-11-20 07:17:45.283438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:03.304 [2024-11-20 07:17:45.283517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.304 [2024-11-20 07:17:45.283570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:03.304 [2024-11-20 07:17:45.283579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.304 [2024-11-20 07:17:45.343322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:03.304 [2024-11-20 07:17:45.343400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.304 [2024-11-20 07:17:45.343422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:03.304 [2024-11-20 07:17:45.343431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.304 [2024-11-20 07:17:45.346049] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.304 [2024-11-20 07:17:45.346091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:03.304 [2024-11-20 07:17:45.346198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:03.304 [2024-11-20 07:17:45.346244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:03.304 [2024-11-20 07:17:45.346408] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:03.304 [2024-11-20 07:17:45.346423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.304 [2024-11-20 07:17:45.346440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:03.304 [2024-11-20 07:17:45.346515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:03.304 [2024-11-20 07:17:45.346605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:03.304 [2024-11-20 07:17:45.346613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:03.304 [2024-11-20 07:17:45.346857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:03.304 [2024-11-20 07:17:45.347003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:03.304 [2024-11-20 07:17:45.347015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:03.304 [2024-11-20 07:17:45.347247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.304 pt1 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.304 "name": "raid_bdev1", 00:21:03.304 "uuid": "4ba88581-f354-42a8-be42-cadd292bdd36", 00:21:03.304 "strip_size_kb": 0, 00:21:03.304 "state": "online", 00:21:03.304 
"raid_level": "raid1", 00:21:03.304 "superblock": true, 00:21:03.304 "num_base_bdevs": 2, 00:21:03.304 "num_base_bdevs_discovered": 1, 00:21:03.304 "num_base_bdevs_operational": 1, 00:21:03.304 "base_bdevs_list": [ 00:21:03.304 { 00:21:03.304 "name": null, 00:21:03.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.304 "is_configured": false, 00:21:03.304 "data_offset": 256, 00:21:03.304 "data_size": 7936 00:21:03.304 }, 00:21:03.304 { 00:21:03.304 "name": "pt2", 00:21:03.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.304 "is_configured": true, 00:21:03.304 "data_offset": 256, 00:21:03.304 "data_size": 7936 00:21:03.304 } 00:21:03.304 ] 00:21:03.304 }' 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.304 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.563 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:03.563 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.563 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.563 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:03.563 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:21:03.822 [2024-11-20 07:17:45.874696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 4ba88581-f354-42a8-be42-cadd292bdd36 '!=' 4ba88581-f354-42a8-be42-cadd292bdd36 ']' 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86700 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86700 ']' 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86700 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86700 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86700' 00:21:03.822 killing process with pid 86700 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86700 00:21:03.822 [2024-11-20 07:17:45.953267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.822 [2024-11-20 07:17:45.953389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.822 [2024-11-20 07:17:45.953445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.822 [2024-11-20 
07:17:45.953461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:03.822 07:17:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86700 00:21:04.082 [2024-11-20 07:17:46.170766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:05.462 07:17:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:05.462 00:21:05.462 real 0m6.143s 00:21:05.462 user 0m9.257s 00:21:05.462 sys 0m1.065s 00:21:05.462 07:17:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.462 07:17:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.462 ************************************ 00:21:05.462 END TEST raid_superblock_test_4k 00:21:05.462 ************************************ 00:21:05.462 07:17:47 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:21:05.462 07:17:47 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:05.462 07:17:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:05.462 07:17:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.462 07:17:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:05.462 ************************************ 00:21:05.462 START TEST raid_rebuild_test_sb_4k 00:21:05.462 ************************************ 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:05.462 07:17:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87028 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87028 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87028 ']' 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.462 07:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.462 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:05.462 Zero copy mechanism will not be used. 00:21:05.462 [2024-11-20 07:17:47.471944] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:05.462 [2024-11-20 07:17:47.472064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87028 ] 00:21:05.462 [2024-11-20 07:17:47.646751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.722 [2024-11-20 07:17:47.762675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.722 [2024-11-20 07:17:47.964212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.722 [2024-11-20 07:17:47.964243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.290 BaseBdev1_malloc 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.290 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.290 [2024-11-20 07:17:48.366590] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:06.290 [2024-11-20 07:17:48.366658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.290 [2024-11-20 07:17:48.366703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:06.290 [2024-11-20 07:17:48.366714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.291 [2024-11-20 07:17:48.368851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.291 [2024-11-20 07:17:48.368944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:06.291 BaseBdev1 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.291 BaseBdev2_malloc 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.291 [2024-11-20 07:17:48.416596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:06.291 [2024-11-20 07:17:48.416671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:06.291 [2024-11-20 07:17:48.416701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:06.291 [2024-11-20 07:17:48.416719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.291 [2024-11-20 07:17:48.419641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.291 [2024-11-20 07:17:48.419701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:06.291 BaseBdev2 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.291 spare_malloc 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.291 spare_delay 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.291 
[2024-11-20 07:17:48.489234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.291 [2024-11-20 07:17:48.489299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.291 [2024-11-20 07:17:48.489324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:06.291 [2024-11-20 07:17:48.489357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.291 [2024-11-20 07:17:48.491606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.291 [2024-11-20 07:17:48.491646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:06.291 spare 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.291 [2024-11-20 07:17:48.497260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.291 [2024-11-20 07:17:48.499260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.291 [2024-11-20 07:17:48.499453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:06.291 [2024-11-20 07:17:48.499471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:06.291 [2024-11-20 07:17:48.499742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:06.291 [2024-11-20 07:17:48.499926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:06.291 [2024-11-20 
07:17:48.499936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:06.291 [2024-11-20 07:17:48.500091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.291 07:17:48 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.550 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.550 "name": "raid_bdev1", 00:21:06.550 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:06.550 "strip_size_kb": 0, 00:21:06.550 "state": "online", 00:21:06.550 "raid_level": "raid1", 00:21:06.550 "superblock": true, 00:21:06.550 "num_base_bdevs": 2, 00:21:06.550 "num_base_bdevs_discovered": 2, 00:21:06.550 "num_base_bdevs_operational": 2, 00:21:06.550 "base_bdevs_list": [ 00:21:06.550 { 00:21:06.550 "name": "BaseBdev1", 00:21:06.550 "uuid": "4d31834d-1c3f-56e3-b0b7-74fecd53987f", 00:21:06.550 "is_configured": true, 00:21:06.550 "data_offset": 256, 00:21:06.550 "data_size": 7936 00:21:06.550 }, 00:21:06.550 { 00:21:06.550 "name": "BaseBdev2", 00:21:06.550 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:06.550 "is_configured": true, 00:21:06.550 "data_offset": 256, 00:21:06.550 "data_size": 7936 00:21:06.550 } 00:21:06.550 ] 00:21:06.550 }' 00:21:06.550 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.550 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.810 [2024-11-20 07:17:48.932863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.810 07:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.810 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.810 
07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:07.069 [2024-11-20 07:17:49.236117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:07.069 /dev/nbd0 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.069 1+0 records in 00:21:07.069 1+0 records out 00:21:07.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550637 s, 7.4 MB/s 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:07.069 07:17:49 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:07.069 07:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:08.006 7936+0 records in 00:21:08.006 7936+0 records out 00:21:08.006 32505856 bytes (33 MB, 31 MiB) copied, 0.722888 s, 45.0 MB/s 00:21:08.006 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:08.006 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:08.006 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:08.006 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.006 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:08.006 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.006 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:08.265 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.265 
[2024-11-20 07:17:50.305722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.265 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.265 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.265 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.265 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.266 [2024-11-20 07:17:50.321819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.266 07:17:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.266 "name": "raid_bdev1", 00:21:08.266 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:08.266 "strip_size_kb": 0, 00:21:08.266 "state": "online", 00:21:08.266 "raid_level": "raid1", 00:21:08.266 "superblock": true, 00:21:08.266 "num_base_bdevs": 2, 00:21:08.266 "num_base_bdevs_discovered": 1, 00:21:08.266 "num_base_bdevs_operational": 1, 00:21:08.266 "base_bdevs_list": [ 00:21:08.266 { 00:21:08.266 "name": null, 00:21:08.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.266 "is_configured": false, 00:21:08.266 "data_offset": 0, 00:21:08.266 "data_size": 7936 00:21:08.266 }, 00:21:08.266 { 00:21:08.266 "name": "BaseBdev2", 00:21:08.266 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:08.266 "is_configured": true, 00:21:08.266 "data_offset": 256, 00:21:08.266 
"data_size": 7936 00:21:08.266 } 00:21:08.266 ] 00:21:08.266 }' 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.266 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.836 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:08.836 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.836 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.836 [2024-11-20 07:17:50.828958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:08.836 [2024-11-20 07:17:50.846185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:08.836 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.836 07:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:08.836 [2024-11-20 07:17:50.848043] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.780 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.780 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.781 "name": "raid_bdev1", 00:21:09.781 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:09.781 "strip_size_kb": 0, 00:21:09.781 "state": "online", 00:21:09.781 "raid_level": "raid1", 00:21:09.781 "superblock": true, 00:21:09.781 "num_base_bdevs": 2, 00:21:09.781 "num_base_bdevs_discovered": 2, 00:21:09.781 "num_base_bdevs_operational": 2, 00:21:09.781 "process": { 00:21:09.781 "type": "rebuild", 00:21:09.781 "target": "spare", 00:21:09.781 "progress": { 00:21:09.781 "blocks": 2560, 00:21:09.781 "percent": 32 00:21:09.781 } 00:21:09.781 }, 00:21:09.781 "base_bdevs_list": [ 00:21:09.781 { 00:21:09.781 "name": "spare", 00:21:09.781 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:09.781 "is_configured": true, 00:21:09.781 "data_offset": 256, 00:21:09.781 "data_size": 7936 00:21:09.781 }, 00:21:09.781 { 00:21:09.781 "name": "BaseBdev2", 00:21:09.781 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:09.781 "is_configured": true, 00:21:09.781 "data_offset": 256, 00:21:09.781 "data_size": 7936 00:21:09.781 } 00:21:09.781 ] 00:21:09.781 }' 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.781 07:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.781 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:21:09.781 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:09.781 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.781 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.781 [2024-11-20 07:17:52.007400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.049 [2024-11-20 07:17:52.053813] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:10.049 [2024-11-20 07:17:52.053881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.049 [2024-11-20 07:17:52.053898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.049 [2024-11-20 07:17:52.053909] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.049 "name": "raid_bdev1", 00:21:10.049 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:10.049 "strip_size_kb": 0, 00:21:10.049 "state": "online", 00:21:10.049 "raid_level": "raid1", 00:21:10.049 "superblock": true, 00:21:10.049 "num_base_bdevs": 2, 00:21:10.049 "num_base_bdevs_discovered": 1, 00:21:10.049 "num_base_bdevs_operational": 1, 00:21:10.049 "base_bdevs_list": [ 00:21:10.049 { 00:21:10.049 "name": null, 00:21:10.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.049 "is_configured": false, 00:21:10.049 "data_offset": 0, 00:21:10.049 "data_size": 7936 00:21:10.049 }, 00:21:10.049 { 00:21:10.049 "name": "BaseBdev2", 00:21:10.049 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:10.049 "is_configured": true, 00:21:10.049 "data_offset": 256, 00:21:10.049 "data_size": 7936 00:21:10.049 } 00:21:10.049 ] 00:21:10.049 }' 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.049 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.309 07:17:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.309 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.568 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.568 "name": "raid_bdev1", 00:21:10.568 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:10.568 "strip_size_kb": 0, 00:21:10.568 "state": "online", 00:21:10.568 "raid_level": "raid1", 00:21:10.568 "superblock": true, 00:21:10.568 "num_base_bdevs": 2, 00:21:10.568 "num_base_bdevs_discovered": 1, 00:21:10.568 "num_base_bdevs_operational": 1, 00:21:10.568 "base_bdevs_list": [ 00:21:10.568 { 00:21:10.569 "name": null, 00:21:10.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.569 "is_configured": false, 00:21:10.569 "data_offset": 0, 00:21:10.569 "data_size": 7936 00:21:10.569 }, 00:21:10.569 { 00:21:10.569 "name": "BaseBdev2", 00:21:10.569 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:10.569 "is_configured": true, 00:21:10.569 "data_offset": 
256, 00:21:10.569 "data_size": 7936 00:21:10.569 } 00:21:10.569 ] 00:21:10.569 }' 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.569 [2024-11-20 07:17:52.641608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:10.569 [2024-11-20 07:17:52.659542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.569 07:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:10.569 [2024-11-20 07:17:52.661676] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.507 "name": "raid_bdev1", 00:21:11.507 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:11.507 "strip_size_kb": 0, 00:21:11.507 "state": "online", 00:21:11.507 "raid_level": "raid1", 00:21:11.507 "superblock": true, 00:21:11.507 "num_base_bdevs": 2, 00:21:11.507 "num_base_bdevs_discovered": 2, 00:21:11.507 "num_base_bdevs_operational": 2, 00:21:11.507 "process": { 00:21:11.507 "type": "rebuild", 00:21:11.507 "target": "spare", 00:21:11.507 "progress": { 00:21:11.507 "blocks": 2560, 00:21:11.507 "percent": 32 00:21:11.507 } 00:21:11.507 }, 00:21:11.507 "base_bdevs_list": [ 00:21:11.507 { 00:21:11.507 "name": "spare", 00:21:11.507 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:11.507 "is_configured": true, 00:21:11.507 "data_offset": 256, 00:21:11.507 "data_size": 7936 00:21:11.507 }, 00:21:11.507 { 00:21:11.507 "name": "BaseBdev2", 00:21:11.507 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:11.507 "is_configured": true, 00:21:11.507 "data_offset": 256, 00:21:11.507 "data_size": 7936 00:21:11.507 } 00:21:11.507 ] 00:21:11.507 }' 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:11.507 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:11.768 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=709 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.768 07:17:53 
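The `bdev_raid.sh: line 666: [: =: unary operator expected` message recorded above is a classic bash pitfall: when a variable expands to the empty string inside an unquoted `[` test, the test loses an operand and `[` rejects the bare `=`. A minimal sketch of the failure mode and the usual quoting fix (the variable name `flag` here is illustrative, not from the test script):

```shell
#!/usr/bin/env bash
# When $flag is empty, the unquoted form `[ $flag = false ]` expands to
# `[ = false ]`, which triggers "[: =: unary operator expected".
# Quoting the expansion keeps the test well-formed even when empty.
flag=""
if [ "$flag" = false ]; then
  echo "flag is false"
else
  echo "flag is not false"
fi
```

Running this prints `flag is not false`; dropping the quotes around `$flag` reproduces the error seen in the log. Using `[[ ... ]]` instead of `[ ... ]` also avoids the problem, since `[[` does not word-split unquoted expansions.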
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.768 "name": "raid_bdev1", 00:21:11.768 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:11.768 "strip_size_kb": 0, 00:21:11.768 "state": "online", 00:21:11.768 "raid_level": "raid1", 00:21:11.768 "superblock": true, 00:21:11.768 "num_base_bdevs": 2, 00:21:11.768 "num_base_bdevs_discovered": 2, 00:21:11.768 "num_base_bdevs_operational": 2, 00:21:11.768 "process": { 00:21:11.768 "type": "rebuild", 00:21:11.768 "target": "spare", 00:21:11.768 "progress": { 00:21:11.768 "blocks": 2816, 00:21:11.768 "percent": 35 00:21:11.768 } 00:21:11.768 }, 00:21:11.768 "base_bdevs_list": [ 00:21:11.768 { 00:21:11.768 "name": "spare", 00:21:11.768 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:11.768 "is_configured": true, 00:21:11.768 "data_offset": 256, 00:21:11.768 "data_size": 7936 00:21:11.768 }, 00:21:11.768 { 00:21:11.768 "name": "BaseBdev2", 00:21:11.768 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:11.768 "is_configured": true, 00:21:11.768 "data_offset": 256, 00:21:11.768 "data_size": 7936 00:21:11.768 } 00:21:11.768 ] 00:21:11.768 }' 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.768 07:17:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:13.145 07:17:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.146 07:17:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.146 "name": "raid_bdev1", 00:21:13.146 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:13.146 "strip_size_kb": 0, 00:21:13.146 "state": "online", 00:21:13.146 "raid_level": "raid1", 00:21:13.146 "superblock": true, 00:21:13.146 "num_base_bdevs": 2, 00:21:13.146 "num_base_bdevs_discovered": 2, 00:21:13.146 "num_base_bdevs_operational": 2, 00:21:13.146 "process": { 00:21:13.146 "type": "rebuild", 00:21:13.146 "target": "spare", 00:21:13.146 "progress": { 00:21:13.146 "blocks": 5888, 00:21:13.146 "percent": 74 00:21:13.146 } 00:21:13.146 }, 00:21:13.146 "base_bdevs_list": [ 00:21:13.146 { 
00:21:13.146 "name": "spare", 00:21:13.146 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:13.146 "is_configured": true, 00:21:13.146 "data_offset": 256, 00:21:13.146 "data_size": 7936 00:21:13.146 }, 00:21:13.146 { 00:21:13.146 "name": "BaseBdev2", 00:21:13.146 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:13.146 "is_configured": true, 00:21:13.146 "data_offset": 256, 00:21:13.146 "data_size": 7936 00:21:13.146 } 00:21:13.146 ] 00:21:13.146 }' 00:21:13.146 07:17:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.146 07:17:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.146 07:17:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.146 07:17:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.146 07:17:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:13.714 [2024-11-20 07:17:55.775933] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:13.714 [2024-11-20 07:17:55.776131] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:13.714 [2024-11-20 07:17:55.776315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.973 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.974 "name": "raid_bdev1", 00:21:13.974 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:13.974 "strip_size_kb": 0, 00:21:13.974 "state": "online", 00:21:13.974 "raid_level": "raid1", 00:21:13.974 "superblock": true, 00:21:13.974 "num_base_bdevs": 2, 00:21:13.974 "num_base_bdevs_discovered": 2, 00:21:13.974 "num_base_bdevs_operational": 2, 00:21:13.974 "base_bdevs_list": [ 00:21:13.974 { 00:21:13.974 "name": "spare", 00:21:13.974 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:13.974 "is_configured": true, 00:21:13.974 "data_offset": 256, 00:21:13.974 "data_size": 7936 00:21:13.974 }, 00:21:13.974 { 00:21:13.974 "name": "BaseBdev2", 00:21:13.974 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:13.974 "is_configured": true, 00:21:13.974 "data_offset": 256, 00:21:13.974 "data_size": 7936 00:21:13.974 } 00:21:13.974 ] 00:21:13.974 }' 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:13.974 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.243 "name": "raid_bdev1", 00:21:14.243 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:14.243 "strip_size_kb": 0, 00:21:14.243 "state": "online", 00:21:14.243 "raid_level": "raid1", 00:21:14.243 "superblock": true, 00:21:14.243 "num_base_bdevs": 2, 00:21:14.243 "num_base_bdevs_discovered": 2, 00:21:14.243 "num_base_bdevs_operational": 2, 00:21:14.243 "base_bdevs_list": [ 00:21:14.243 { 00:21:14.243 "name": "spare", 00:21:14.243 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:14.243 "is_configured": true, 00:21:14.243 
"data_offset": 256, 00:21:14.243 "data_size": 7936 00:21:14.243 }, 00:21:14.243 { 00:21:14.243 "name": "BaseBdev2", 00:21:14.243 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:14.243 "is_configured": true, 00:21:14.243 "data_offset": 256, 00:21:14.243 "data_size": 7936 00:21:14.243 } 00:21:14.243 ] 00:21:14.243 }' 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.243 "name": "raid_bdev1", 00:21:14.243 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:14.243 "strip_size_kb": 0, 00:21:14.243 "state": "online", 00:21:14.243 "raid_level": "raid1", 00:21:14.243 "superblock": true, 00:21:14.243 "num_base_bdevs": 2, 00:21:14.243 "num_base_bdevs_discovered": 2, 00:21:14.243 "num_base_bdevs_operational": 2, 00:21:14.243 "base_bdevs_list": [ 00:21:14.243 { 00:21:14.243 "name": "spare", 00:21:14.243 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:14.243 "is_configured": true, 00:21:14.243 "data_offset": 256, 00:21:14.243 "data_size": 7936 00:21:14.243 }, 00:21:14.243 { 00:21:14.243 "name": "BaseBdev2", 00:21:14.243 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:14.243 "is_configured": true, 00:21:14.243 "data_offset": 256, 00:21:14.243 "data_size": 7936 00:21:14.243 } 00:21:14.243 ] 00:21:14.243 }' 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.243 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.835 
[2024-11-20 07:17:56.806792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.835 [2024-11-20 07:17:56.806868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.835 [2024-11-20 07:17:56.806998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.835 [2024-11-20 07:17:56.807092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.835 [2024-11-20 07:17:56.807105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:14.835 07:17:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:14.835 /dev/nbd0 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:15.094 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.095 1+0 records in 00:21:15.095 1+0 records out 00:21:15.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362274 s, 11.3 MB/s 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.095 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:15.095 /dev/nbd1 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.355 1+0 records in 00:21:15.355 1+0 records out 00:21:15.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291856 s, 14.0 MB/s 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.355 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.614 07:17:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:15.874 07:17:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.874 [2024-11-20 07:17:58.049082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:15.874 [2024-11-20 07:17:58.049145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.874 [2024-11-20 07:17:58.049173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:15.874 [2024-11-20 07:17:58.049183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.874 [2024-11-20 07:17:58.051516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.874 
[2024-11-20 07:17:58.051618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:15.874 [2024-11-20 07:17:58.051745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:15.874 [2024-11-20 07:17:58.051813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:15.874 [2024-11-20 07:17:58.052017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.874 spare 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.874 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.134 [2024-11-20 07:17:58.151936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:16.134 [2024-11-20 07:17:58.152008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:16.134 [2024-11-20 07:17:58.152296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:16.134 [2024-11-20 07:17:58.152485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:16.134 [2024-11-20 07:17:58.152499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:16.134 [2024-11-20 07:17:58.152680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:16.134 07:17:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.134 "name": "raid_bdev1", 00:21:16.134 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:16.134 "strip_size_kb": 0, 00:21:16.134 "state": "online", 00:21:16.134 "raid_level": "raid1", 00:21:16.134 "superblock": true, 00:21:16.134 "num_base_bdevs": 2, 00:21:16.134 "num_base_bdevs_discovered": 2, 00:21:16.134 "num_base_bdevs_operational": 2, 
00:21:16.134 "base_bdevs_list": [ 00:21:16.134 { 00:21:16.134 "name": "spare", 00:21:16.134 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:16.134 "is_configured": true, 00:21:16.134 "data_offset": 256, 00:21:16.134 "data_size": 7936 00:21:16.134 }, 00:21:16.134 { 00:21:16.134 "name": "BaseBdev2", 00:21:16.134 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:16.134 "is_configured": true, 00:21:16.134 "data_offset": 256, 00:21:16.134 "data_size": 7936 00:21:16.134 } 00:21:16.134 ] 00:21:16.134 }' 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.134 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.393 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.393 "name": "raid_bdev1", 00:21:16.393 
"uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:16.393 "strip_size_kb": 0, 00:21:16.393 "state": "online", 00:21:16.393 "raid_level": "raid1", 00:21:16.394 "superblock": true, 00:21:16.394 "num_base_bdevs": 2, 00:21:16.394 "num_base_bdevs_discovered": 2, 00:21:16.394 "num_base_bdevs_operational": 2, 00:21:16.394 "base_bdevs_list": [ 00:21:16.394 { 00:21:16.394 "name": "spare", 00:21:16.394 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:16.394 "is_configured": true, 00:21:16.394 "data_offset": 256, 00:21:16.394 "data_size": 7936 00:21:16.394 }, 00:21:16.394 { 00:21:16.394 "name": "BaseBdev2", 00:21:16.394 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:16.394 "is_configured": true, 00:21:16.394 "data_offset": 256, 00:21:16.394 "data_size": 7936 00:21:16.394 } 00:21:16.394 ] 00:21:16.394 }' 00:21:16.394 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:16.653 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.654 [2024-11-20 07:17:58.759915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.654 07:17:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.654 "name": "raid_bdev1", 00:21:16.654 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:16.654 "strip_size_kb": 0, 00:21:16.654 "state": "online", 00:21:16.654 "raid_level": "raid1", 00:21:16.654 "superblock": true, 00:21:16.654 "num_base_bdevs": 2, 00:21:16.654 "num_base_bdevs_discovered": 1, 00:21:16.654 "num_base_bdevs_operational": 1, 00:21:16.654 "base_bdevs_list": [ 00:21:16.654 { 00:21:16.654 "name": null, 00:21:16.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.654 "is_configured": false, 00:21:16.654 "data_offset": 0, 00:21:16.654 "data_size": 7936 00:21:16.654 }, 00:21:16.654 { 00:21:16.654 "name": "BaseBdev2", 00:21:16.654 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:16.654 "is_configured": true, 00:21:16.654 "data_offset": 256, 00:21:16.654 "data_size": 7936 00:21:16.654 } 00:21:16.654 ] 00:21:16.654 }' 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.654 07:17:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.222 07:17:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:17.222 07:17:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.222 07:17:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.222 [2024-11-20 07:17:59.223180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:17.222 [2024-11-20 07:17:59.223450] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:21:17.222 [2024-11-20 07:17:59.223522] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:17.222 [2024-11-20 07:17:59.223588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:17.222 [2024-11-20 07:17:59.241125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:17.222 07:17:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.222 07:17:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:17.222 [2024-11-20 07:17:59.243211] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:18.163 "name": "raid_bdev1", 00:21:18.163 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:18.163 "strip_size_kb": 0, 00:21:18.163 "state": "online", 00:21:18.163 "raid_level": "raid1", 00:21:18.163 "superblock": true, 00:21:18.163 "num_base_bdevs": 2, 00:21:18.163 "num_base_bdevs_discovered": 2, 00:21:18.163 "num_base_bdevs_operational": 2, 00:21:18.163 "process": { 00:21:18.163 "type": "rebuild", 00:21:18.163 "target": "spare", 00:21:18.163 "progress": { 00:21:18.163 "blocks": 2560, 00:21:18.163 "percent": 32 00:21:18.163 } 00:21:18.163 }, 00:21:18.163 "base_bdevs_list": [ 00:21:18.163 { 00:21:18.163 "name": "spare", 00:21:18.163 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:18.163 "is_configured": true, 00:21:18.163 "data_offset": 256, 00:21:18.163 "data_size": 7936 00:21:18.163 }, 00:21:18.163 { 00:21:18.163 "name": "BaseBdev2", 00:21:18.163 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:18.163 "is_configured": true, 00:21:18.163 "data_offset": 256, 00:21:18.163 "data_size": 7936 00:21:18.163 } 00:21:18.163 ] 00:21:18.163 }' 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.163 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.163 [2024-11-20 07:18:00.406631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:21:18.427 [2024-11-20 07:18:00.448737] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:18.427 [2024-11-20 07:18:00.448892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.427 [2024-11-20 07:18:00.448933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:18.427 [2024-11-20 07:18:00.448960] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.427 "name": "raid_bdev1", 00:21:18.427 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:18.427 "strip_size_kb": 0, 00:21:18.427 "state": "online", 00:21:18.427 "raid_level": "raid1", 00:21:18.427 "superblock": true, 00:21:18.427 "num_base_bdevs": 2, 00:21:18.427 "num_base_bdevs_discovered": 1, 00:21:18.427 "num_base_bdevs_operational": 1, 00:21:18.427 "base_bdevs_list": [ 00:21:18.427 { 00:21:18.427 "name": null, 00:21:18.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.427 "is_configured": false, 00:21:18.427 "data_offset": 0, 00:21:18.427 "data_size": 7936 00:21:18.427 }, 00:21:18.427 { 00:21:18.427 "name": "BaseBdev2", 00:21:18.427 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:18.427 "is_configured": true, 00:21:18.427 "data_offset": 256, 00:21:18.427 "data_size": 7936 00:21:18.427 } 00:21:18.427 ] 00:21:18.427 }' 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.427 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.996 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:18.996 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.996 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.996 [2024-11-20 07:18:00.966257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:18.996 [2024-11-20 
07:18:00.966400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.996 [2024-11-20 07:18:00.966447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:18.996 [2024-11-20 07:18:00.966484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.996 [2024-11-20 07:18:00.967021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.996 [2024-11-20 07:18:00.967089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:18.996 [2024-11-20 07:18:00.967236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:18.996 [2024-11-20 07:18:00.967284] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:18.996 [2024-11-20 07:18:00.967344] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:18.996 [2024-11-20 07:18:00.967406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.996 [2024-11-20 07:18:00.984993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:18.996 spare 00:21:18.996 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.996 07:18:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:18.996 [2024-11-20 07:18:00.987023] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.935 07:18:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.935 "name": "raid_bdev1", 00:21:19.935 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:19.935 "strip_size_kb": 0, 00:21:19.935 
"state": "online", 00:21:19.935 "raid_level": "raid1", 00:21:19.935 "superblock": true, 00:21:19.935 "num_base_bdevs": 2, 00:21:19.935 "num_base_bdevs_discovered": 2, 00:21:19.935 "num_base_bdevs_operational": 2, 00:21:19.935 "process": { 00:21:19.935 "type": "rebuild", 00:21:19.935 "target": "spare", 00:21:19.935 "progress": { 00:21:19.935 "blocks": 2560, 00:21:19.935 "percent": 32 00:21:19.935 } 00:21:19.935 }, 00:21:19.935 "base_bdevs_list": [ 00:21:19.935 { 00:21:19.935 "name": "spare", 00:21:19.935 "uuid": "261d58c3-f3f6-509e-81e5-480d10df7ffd", 00:21:19.935 "is_configured": true, 00:21:19.935 "data_offset": 256, 00:21:19.935 "data_size": 7936 00:21:19.935 }, 00:21:19.935 { 00:21:19.935 "name": "BaseBdev2", 00:21:19.935 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:19.935 "is_configured": true, 00:21:19.935 "data_offset": 256, 00:21:19.935 "data_size": 7936 00:21:19.935 } 00:21:19.935 ] 00:21:19.935 }' 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.935 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.935 [2024-11-20 07:18:02.110675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.935 [2024-11-20 07:18:02.192942] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:21:19.935 [2024-11-20 07:18:02.193016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.935 [2024-11-20 07:18:02.193037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.935 [2024-11-20 07:18:02.193045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.194 07:18:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.194 "name": "raid_bdev1", 00:21:20.194 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:20.194 "strip_size_kb": 0, 00:21:20.194 "state": "online", 00:21:20.194 "raid_level": "raid1", 00:21:20.194 "superblock": true, 00:21:20.194 "num_base_bdevs": 2, 00:21:20.194 "num_base_bdevs_discovered": 1, 00:21:20.194 "num_base_bdevs_operational": 1, 00:21:20.194 "base_bdevs_list": [ 00:21:20.194 { 00:21:20.194 "name": null, 00:21:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.194 "is_configured": false, 00:21:20.194 "data_offset": 0, 00:21:20.194 "data_size": 7936 00:21:20.194 }, 00:21:20.194 { 00:21:20.194 "name": "BaseBdev2", 00:21:20.194 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:20.194 "is_configured": true, 00:21:20.194 "data_offset": 256, 00:21:20.194 "data_size": 7936 00:21:20.194 } 00:21:20.194 ] 00:21:20.194 }' 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.194 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.453 "name": "raid_bdev1", 00:21:20.453 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:20.453 "strip_size_kb": 0, 00:21:20.453 "state": "online", 00:21:20.453 "raid_level": "raid1", 00:21:20.453 "superblock": true, 00:21:20.453 "num_base_bdevs": 2, 00:21:20.453 "num_base_bdevs_discovered": 1, 00:21:20.453 "num_base_bdevs_operational": 1, 00:21:20.453 "base_bdevs_list": [ 00:21:20.453 { 00:21:20.453 "name": null, 00:21:20.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.453 "is_configured": false, 00:21:20.453 "data_offset": 0, 00:21:20.453 "data_size": 7936 00:21:20.453 }, 00:21:20.453 { 00:21:20.453 "name": "BaseBdev2", 00:21:20.453 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:20.453 "is_configured": true, 00:21:20.453 "data_offset": 256, 00:21:20.453 "data_size": 7936 00:21:20.453 } 00:21:20.453 ] 00:21:20.453 }' 00:21:20.453 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.712 [2024-11-20 07:18:02.787003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:20.712 [2024-11-20 07:18:02.787060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.712 [2024-11-20 07:18:02.787084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:20.712 [2024-11-20 07:18:02.787101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.712 [2024-11-20 07:18:02.787539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.712 [2024-11-20 07:18:02.787561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:20.712 [2024-11-20 07:18:02.787646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:20.712 [2024-11-20 07:18:02.787666] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:20.712 [2024-11-20 07:18:02.787676] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:20.712 [2024-11-20 07:18:02.787702] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:20.712 BaseBdev1 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.712 07:18:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.649 "name": "raid_bdev1", 00:21:21.649 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:21.649 "strip_size_kb": 0, 00:21:21.649 "state": "online", 00:21:21.649 "raid_level": "raid1", 00:21:21.649 "superblock": true, 00:21:21.649 "num_base_bdevs": 2, 00:21:21.649 "num_base_bdevs_discovered": 1, 00:21:21.649 "num_base_bdevs_operational": 1, 00:21:21.649 "base_bdevs_list": [ 00:21:21.649 { 00:21:21.649 "name": null, 00:21:21.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.649 "is_configured": false, 00:21:21.649 "data_offset": 0, 00:21:21.649 "data_size": 7936 00:21:21.649 }, 00:21:21.649 { 00:21:21.649 "name": "BaseBdev2", 00:21:21.649 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:21.649 "is_configured": true, 00:21:21.649 "data_offset": 256, 00:21:21.649 "data_size": 7936 00:21:21.649 } 00:21:21.649 ] 00:21:21.649 }' 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.649 07:18:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.221 "name": "raid_bdev1", 00:21:22.221 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:22.221 "strip_size_kb": 0, 00:21:22.221 "state": "online", 00:21:22.221 "raid_level": "raid1", 00:21:22.221 "superblock": true, 00:21:22.221 "num_base_bdevs": 2, 00:21:22.221 "num_base_bdevs_discovered": 1, 00:21:22.221 "num_base_bdevs_operational": 1, 00:21:22.221 "base_bdevs_list": [ 00:21:22.221 { 00:21:22.221 "name": null, 00:21:22.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.221 "is_configured": false, 00:21:22.221 "data_offset": 0, 00:21:22.221 "data_size": 7936 00:21:22.221 }, 00:21:22.221 { 00:21:22.221 "name": "BaseBdev2", 00:21:22.221 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:22.221 "is_configured": true, 00:21:22.221 "data_offset": 256, 00:21:22.221 "data_size": 7936 00:21:22.221 } 00:21:22.221 ] 00:21:22.221 }' 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:22.221 [2024-11-20 07:18:04.384422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.221 [2024-11-20 07:18:04.384655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:22.221 [2024-11-20 07:18:04.384722] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:22.221 request: 00:21:22.221 { 00:21:22.221 "base_bdev": "BaseBdev1", 00:21:22.221 "raid_bdev": "raid_bdev1", 00:21:22.221 "method": "bdev_raid_add_base_bdev", 00:21:22.221 "req_id": 1 00:21:22.221 } 00:21:22.221 Got JSON-RPC error response 00:21:22.221 response: 00:21:22.221 { 00:21:22.221 "code": -22, 00:21:22.221 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:22.221 } 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.221 07:18:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.166 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.424 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.424 "name": "raid_bdev1", 00:21:23.424 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:23.424 "strip_size_kb": 0, 00:21:23.424 "state": "online", 00:21:23.424 "raid_level": "raid1", 00:21:23.424 "superblock": true, 00:21:23.424 "num_base_bdevs": 2, 00:21:23.424 "num_base_bdevs_discovered": 1, 00:21:23.424 "num_base_bdevs_operational": 1, 00:21:23.424 "base_bdevs_list": [ 00:21:23.424 { 00:21:23.424 "name": null, 00:21:23.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.424 "is_configured": false, 00:21:23.424 "data_offset": 0, 00:21:23.424 "data_size": 7936 00:21:23.424 }, 00:21:23.424 { 00:21:23.424 "name": "BaseBdev2", 00:21:23.424 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:23.424 "is_configured": true, 00:21:23.424 "data_offset": 256, 00:21:23.424 "data_size": 7936 00:21:23.424 } 00:21:23.424 ] 00:21:23.424 }' 00:21:23.424 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.424 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.683 07:18:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.683 "name": "raid_bdev1", 00:21:23.683 "uuid": "f760fa9b-7edf-4870-9be4-b13c26da897d", 00:21:23.683 "strip_size_kb": 0, 00:21:23.683 "state": "online", 00:21:23.683 "raid_level": "raid1", 00:21:23.683 "superblock": true, 00:21:23.683 "num_base_bdevs": 2, 00:21:23.683 "num_base_bdevs_discovered": 1, 00:21:23.683 "num_base_bdevs_operational": 1, 00:21:23.683 "base_bdevs_list": [ 00:21:23.683 { 00:21:23.683 "name": null, 00:21:23.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.683 "is_configured": false, 00:21:23.683 "data_offset": 0, 00:21:23.683 "data_size": 7936 00:21:23.683 }, 00:21:23.683 { 00:21:23.683 "name": "BaseBdev2", 00:21:23.683 "uuid": "b98e4f85-8c6b-5939-83a2-b688b4caf663", 00:21:23.683 "is_configured": true, 00:21:23.683 "data_offset": 256, 00:21:23.683 "data_size": 7936 00:21:23.683 } 00:21:23.683 ] 00:21:23.683 }' 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.683 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.942 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.942 07:18:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87028 00:21:23.942 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87028 ']' 00:21:23.942 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87028 00:21:23.942 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:23.942 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.942 07:18:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87028 00:21:23.942 07:18:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.942 07:18:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.942 killing process with pid 87028 00:21:23.942 07:18:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87028' 00:21:23.942 07:18:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87028 00:21:23.942 Received shutdown signal, test time was about 60.000000 seconds 00:21:23.942 00:21:23.942 Latency(us) 00:21:23.942 [2024-11-20T07:18:06.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.942 [2024-11-20T07:18:06.207Z] =================================================================================================================== 00:21:23.942 [2024-11-20T07:18:06.207Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:23.942 [2024-11-20 07:18:06.029592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:23.942 07:18:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87028 00:21:23.942 [2024-11-20 07:18:06.029742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.942 [2024-11-20 
07:18:06.029803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.942 [2024-11-20 07:18:06.029817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:24.201 [2024-11-20 07:18:06.349314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:25.579 07:18:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:21:25.579 00:21:25.579 real 0m20.166s 00:21:25.579 user 0m26.233s 00:21:25.579 sys 0m2.727s 00:21:25.579 ************************************ 00:21:25.579 END TEST raid_rebuild_test_sb_4k 00:21:25.579 ************************************ 00:21:25.579 07:18:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.579 07:18:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:25.579 07:18:07 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:21:25.579 07:18:07 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:21:25.579 07:18:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:25.579 07:18:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.579 07:18:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:25.579 ************************************ 00:21:25.579 START TEST raid_state_function_test_sb_md_separate 00:21:25.579 ************************************ 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:25.579 
07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:25.579 07:18:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87719 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:25.579 Process raid pid: 87719 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87719' 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87719 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87719 ']' 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.579 07:18:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.579 [2024-11-20 07:18:07.707503] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:25.579 [2024-11-20 07:18:07.708129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.838 [2024-11-20 07:18:07.883196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.838 [2024-11-20 07:18:08.001780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.097 [2024-11-20 07:18:08.213159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:26.097 [2024-11-20 07:18:08.213287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.356 [2024-11-20 07:18:08.559636] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:26.356 [2024-11-20 07:18:08.559686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:21:26.356 [2024-11-20 07:18:08.559697] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:26.356 [2024-11-20 07:18:08.559707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.356 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:21:26.357 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.357 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.357 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.357 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.357 "name": "Existed_Raid", 00:21:26.357 "uuid": "5945ba39-2be2-46d6-a466-9e2f6601e9d6", 00:21:26.357 "strip_size_kb": 0, 00:21:26.357 "state": "configuring", 00:21:26.357 "raid_level": "raid1", 00:21:26.357 "superblock": true, 00:21:26.357 "num_base_bdevs": 2, 00:21:26.357 "num_base_bdevs_discovered": 0, 00:21:26.357 "num_base_bdevs_operational": 2, 00:21:26.357 "base_bdevs_list": [ 00:21:26.357 { 00:21:26.357 "name": "BaseBdev1", 00:21:26.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.357 "is_configured": false, 00:21:26.357 "data_offset": 0, 00:21:26.357 "data_size": 0 00:21:26.357 }, 00:21:26.357 { 00:21:26.357 "name": "BaseBdev2", 00:21:26.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.357 "is_configured": false, 00:21:26.357 "data_offset": 0, 00:21:26.357 "data_size": 0 00:21:26.357 } 00:21:26.357 ] 00:21:26.357 }' 00:21:26.357 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.357 07:18:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.922 
[2024-11-20 07:18:09.050741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.922 [2024-11-20 07:18:09.050843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.922 [2024-11-20 07:18:09.062747] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:26.922 [2024-11-20 07:18:09.062835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:26.922 [2024-11-20 07:18:09.062869] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:26.922 [2024-11-20 07:18:09.062899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.922 [2024-11-20 07:18:09.115917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.922 
BaseBdev1 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.922 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.922 [ 00:21:26.922 { 00:21:26.922 "name": "BaseBdev1", 00:21:26.922 "aliases": [ 00:21:26.922 "26b7a479-55b4-4c6f-b9a5-5941c26ede42" 00:21:26.922 ], 00:21:26.922 "product_name": "Malloc disk", 
00:21:26.922 "block_size": 4096, 00:21:26.922 "num_blocks": 8192, 00:21:26.922 "uuid": "26b7a479-55b4-4c6f-b9a5-5941c26ede42", 00:21:26.922 "md_size": 32, 00:21:26.923 "md_interleave": false, 00:21:26.923 "dif_type": 0, 00:21:26.923 "assigned_rate_limits": { 00:21:26.923 "rw_ios_per_sec": 0, 00:21:26.923 "rw_mbytes_per_sec": 0, 00:21:26.923 "r_mbytes_per_sec": 0, 00:21:26.923 "w_mbytes_per_sec": 0 00:21:26.923 }, 00:21:26.923 "claimed": true, 00:21:26.923 "claim_type": "exclusive_write", 00:21:26.923 "zoned": false, 00:21:26.923 "supported_io_types": { 00:21:26.923 "read": true, 00:21:26.923 "write": true, 00:21:26.923 "unmap": true, 00:21:26.923 "flush": true, 00:21:26.923 "reset": true, 00:21:26.923 "nvme_admin": false, 00:21:26.923 "nvme_io": false, 00:21:26.923 "nvme_io_md": false, 00:21:26.923 "write_zeroes": true, 00:21:26.923 "zcopy": true, 00:21:26.923 "get_zone_info": false, 00:21:26.923 "zone_management": false, 00:21:26.923 "zone_append": false, 00:21:26.923 "compare": false, 00:21:26.923 "compare_and_write": false, 00:21:26.923 "abort": true, 00:21:26.923 "seek_hole": false, 00:21:26.923 "seek_data": false, 00:21:26.923 "copy": true, 00:21:26.923 "nvme_iov_md": false 00:21:26.923 }, 00:21:26.923 "memory_domains": [ 00:21:26.923 { 00:21:26.923 "dma_device_id": "system", 00:21:26.923 "dma_device_type": 1 00:21:26.923 }, 00:21:26.923 { 00:21:26.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.923 "dma_device_type": 2 00:21:26.923 } 00:21:26.923 ], 00:21:26.923 "driver_specific": {} 00:21:26.923 } 00:21:26.923 ] 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:26.923 07:18:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.923 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.182 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.182 "name": "Existed_Raid", 00:21:27.182 "uuid": "4002f334-7a46-452f-8a0d-3f9c08999d55", 
00:21:27.182 "strip_size_kb": 0, 00:21:27.182 "state": "configuring", 00:21:27.182 "raid_level": "raid1", 00:21:27.182 "superblock": true, 00:21:27.182 "num_base_bdevs": 2, 00:21:27.182 "num_base_bdevs_discovered": 1, 00:21:27.182 "num_base_bdevs_operational": 2, 00:21:27.183 "base_bdevs_list": [ 00:21:27.183 { 00:21:27.183 "name": "BaseBdev1", 00:21:27.183 "uuid": "26b7a479-55b4-4c6f-b9a5-5941c26ede42", 00:21:27.183 "is_configured": true, 00:21:27.183 "data_offset": 256, 00:21:27.183 "data_size": 7936 00:21:27.183 }, 00:21:27.183 { 00:21:27.183 "name": "BaseBdev2", 00:21:27.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.183 "is_configured": false, 00:21:27.183 "data_offset": 0, 00:21:27.183 "data_size": 0 00:21:27.183 } 00:21:27.183 ] 00:21:27.183 }' 00:21:27.183 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.183 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.445 [2024-11-20 07:18:09.587211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:27.445 [2024-11-20 07:18:09.587321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:27.445 07:18:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.445 [2024-11-20 07:18:09.599247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.445 [2024-11-20 07:18:09.601321] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.445 [2024-11-20 07:18:09.601385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.445 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.445 "name": "Existed_Raid", 00:21:27.445 "uuid": "fcbe9adb-535a-4d2d-b4d9-c7395716bf9b", 00:21:27.445 "strip_size_kb": 0, 00:21:27.446 "state": "configuring", 00:21:27.446 "raid_level": "raid1", 00:21:27.446 "superblock": true, 00:21:27.446 "num_base_bdevs": 2, 00:21:27.446 "num_base_bdevs_discovered": 1, 00:21:27.446 "num_base_bdevs_operational": 2, 00:21:27.446 "base_bdevs_list": [ 00:21:27.446 { 00:21:27.446 "name": "BaseBdev1", 00:21:27.446 "uuid": "26b7a479-55b4-4c6f-b9a5-5941c26ede42", 00:21:27.446 "is_configured": true, 00:21:27.446 "data_offset": 256, 00:21:27.446 "data_size": 7936 00:21:27.446 }, 00:21:27.446 { 00:21:27.446 "name": "BaseBdev2", 00:21:27.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.446 "is_configured": false, 00:21:27.446 "data_offset": 0, 00:21:27.446 "data_size": 0 00:21:27.446 } 00:21:27.446 ] 00:21:27.446 }' 00:21:27.446 07:18:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.446 07:18:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.013 [2024-11-20 07:18:10.138551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.013 [2024-11-20 07:18:10.138952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:28.013 [2024-11-20 07:18:10.139015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:28.013 [2024-11-20 07:18:10.139141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:28.013 [2024-11-20 07:18:10.139340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:28.013 [2024-11-20 07:18:10.139402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:28.013 [2024-11-20 07:18:10.139568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.013 BaseBdev2 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.013 [ 00:21:28.013 { 00:21:28.013 "name": "BaseBdev2", 00:21:28.013 "aliases": [ 00:21:28.013 "5c4dc79e-ad7d-435c-8674-3479e22a5cf5" 00:21:28.013 ], 00:21:28.013 "product_name": "Malloc disk", 00:21:28.013 "block_size": 4096, 00:21:28.013 "num_blocks": 8192, 00:21:28.013 "uuid": "5c4dc79e-ad7d-435c-8674-3479e22a5cf5", 00:21:28.013 "md_size": 32, 00:21:28.013 "md_interleave": false, 00:21:28.013 "dif_type": 0, 00:21:28.013 "assigned_rate_limits": { 00:21:28.013 "rw_ios_per_sec": 0, 00:21:28.013 "rw_mbytes_per_sec": 0, 00:21:28.013 "r_mbytes_per_sec": 0, 00:21:28.013 "w_mbytes_per_sec": 0 00:21:28.013 }, 00:21:28.013 "claimed": true, 00:21:28.013 "claim_type": 
"exclusive_write", 00:21:28.013 "zoned": false, 00:21:28.013 "supported_io_types": { 00:21:28.013 "read": true, 00:21:28.013 "write": true, 00:21:28.013 "unmap": true, 00:21:28.013 "flush": true, 00:21:28.013 "reset": true, 00:21:28.013 "nvme_admin": false, 00:21:28.013 "nvme_io": false, 00:21:28.013 "nvme_io_md": false, 00:21:28.013 "write_zeroes": true, 00:21:28.013 "zcopy": true, 00:21:28.013 "get_zone_info": false, 00:21:28.013 "zone_management": false, 00:21:28.013 "zone_append": false, 00:21:28.013 "compare": false, 00:21:28.013 "compare_and_write": false, 00:21:28.013 "abort": true, 00:21:28.013 "seek_hole": false, 00:21:28.013 "seek_data": false, 00:21:28.013 "copy": true, 00:21:28.013 "nvme_iov_md": false 00:21:28.013 }, 00:21:28.013 "memory_domains": [ 00:21:28.013 { 00:21:28.013 "dma_device_id": "system", 00:21:28.013 "dma_device_type": 1 00:21:28.013 }, 00:21:28.013 { 00:21:28.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.013 "dma_device_type": 2 00:21:28.013 } 00:21:28.013 ], 00:21:28.013 "driver_specific": {} 00:21:28.013 } 00:21:28.013 ] 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.013 
07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.013 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.013 "name": "Existed_Raid", 00:21:28.013 "uuid": "fcbe9adb-535a-4d2d-b4d9-c7395716bf9b", 00:21:28.013 "strip_size_kb": 0, 00:21:28.013 "state": "online", 00:21:28.013 "raid_level": "raid1", 00:21:28.013 "superblock": true, 00:21:28.013 "num_base_bdevs": 2, 00:21:28.013 "num_base_bdevs_discovered": 2, 00:21:28.013 "num_base_bdevs_operational": 2, 00:21:28.014 
"base_bdevs_list": [ 00:21:28.014 { 00:21:28.014 "name": "BaseBdev1", 00:21:28.014 "uuid": "26b7a479-55b4-4c6f-b9a5-5941c26ede42", 00:21:28.014 "is_configured": true, 00:21:28.014 "data_offset": 256, 00:21:28.014 "data_size": 7936 00:21:28.014 }, 00:21:28.014 { 00:21:28.014 "name": "BaseBdev2", 00:21:28.014 "uuid": "5c4dc79e-ad7d-435c-8674-3479e22a5cf5", 00:21:28.014 "is_configured": true, 00:21:28.014 "data_offset": 256, 00:21:28.014 "data_size": 7936 00:21:28.014 } 00:21:28.014 ] 00:21:28.014 }' 00:21:28.014 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.014 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:21:28.581 [2024-11-20 07:18:10.642129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:28.581 "name": "Existed_Raid", 00:21:28.581 "aliases": [ 00:21:28.581 "fcbe9adb-535a-4d2d-b4d9-c7395716bf9b" 00:21:28.581 ], 00:21:28.581 "product_name": "Raid Volume", 00:21:28.581 "block_size": 4096, 00:21:28.581 "num_blocks": 7936, 00:21:28.581 "uuid": "fcbe9adb-535a-4d2d-b4d9-c7395716bf9b", 00:21:28.581 "md_size": 32, 00:21:28.581 "md_interleave": false, 00:21:28.581 "dif_type": 0, 00:21:28.581 "assigned_rate_limits": { 00:21:28.581 "rw_ios_per_sec": 0, 00:21:28.581 "rw_mbytes_per_sec": 0, 00:21:28.581 "r_mbytes_per_sec": 0, 00:21:28.581 "w_mbytes_per_sec": 0 00:21:28.581 }, 00:21:28.581 "claimed": false, 00:21:28.581 "zoned": false, 00:21:28.581 "supported_io_types": { 00:21:28.581 "read": true, 00:21:28.581 "write": true, 00:21:28.581 "unmap": false, 00:21:28.581 "flush": false, 00:21:28.581 "reset": true, 00:21:28.581 "nvme_admin": false, 00:21:28.581 "nvme_io": false, 00:21:28.581 "nvme_io_md": false, 00:21:28.581 "write_zeroes": true, 00:21:28.581 "zcopy": false, 00:21:28.581 "get_zone_info": false, 00:21:28.581 "zone_management": false, 00:21:28.581 "zone_append": false, 00:21:28.581 "compare": false, 00:21:28.581 "compare_and_write": false, 00:21:28.581 "abort": false, 00:21:28.581 "seek_hole": false, 00:21:28.581 "seek_data": false, 00:21:28.581 "copy": false, 00:21:28.581 "nvme_iov_md": false 00:21:28.581 }, 00:21:28.581 "memory_domains": [ 00:21:28.581 { 00:21:28.581 "dma_device_id": "system", 00:21:28.581 "dma_device_type": 1 00:21:28.581 }, 00:21:28.581 { 00:21:28.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.581 "dma_device_type": 2 00:21:28.581 }, 00:21:28.581 { 
00:21:28.581 "dma_device_id": "system", 00:21:28.581 "dma_device_type": 1 00:21:28.581 }, 00:21:28.581 { 00:21:28.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.581 "dma_device_type": 2 00:21:28.581 } 00:21:28.581 ], 00:21:28.581 "driver_specific": { 00:21:28.581 "raid": { 00:21:28.581 "uuid": "fcbe9adb-535a-4d2d-b4d9-c7395716bf9b", 00:21:28.581 "strip_size_kb": 0, 00:21:28.581 "state": "online", 00:21:28.581 "raid_level": "raid1", 00:21:28.581 "superblock": true, 00:21:28.581 "num_base_bdevs": 2, 00:21:28.581 "num_base_bdevs_discovered": 2, 00:21:28.581 "num_base_bdevs_operational": 2, 00:21:28.581 "base_bdevs_list": [ 00:21:28.581 { 00:21:28.581 "name": "BaseBdev1", 00:21:28.581 "uuid": "26b7a479-55b4-4c6f-b9a5-5941c26ede42", 00:21:28.581 "is_configured": true, 00:21:28.581 "data_offset": 256, 00:21:28.581 "data_size": 7936 00:21:28.581 }, 00:21:28.581 { 00:21:28.581 "name": "BaseBdev2", 00:21:28.581 "uuid": "5c4dc79e-ad7d-435c-8674-3479e22a5cf5", 00:21:28.581 "is_configured": true, 00:21:28.581 "data_offset": 256, 00:21:28.581 "data_size": 7936 00:21:28.581 } 00:21:28.581 ] 00:21:28.581 } 00:21:28.581 } 00:21:28.581 }' 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:28.581 BaseBdev2' 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.581 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.840 07:18:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.840 [2024-11-20 07:18:10.917329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.840 "name": "Existed_Raid", 00:21:28.840 "uuid": "fcbe9adb-535a-4d2d-b4d9-c7395716bf9b", 00:21:28.840 "strip_size_kb": 0, 00:21:28.840 "state": "online", 00:21:28.840 "raid_level": "raid1", 00:21:28.840 "superblock": true, 00:21:28.840 "num_base_bdevs": 2, 00:21:28.840 "num_base_bdevs_discovered": 1, 00:21:28.840 "num_base_bdevs_operational": 1, 00:21:28.840 "base_bdevs_list": [ 00:21:28.840 { 00:21:28.840 "name": null, 00:21:28.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.840 "is_configured": false, 00:21:28.840 "data_offset": 0, 00:21:28.840 "data_size": 7936 00:21:28.840 }, 00:21:28.840 { 00:21:28.840 "name": "BaseBdev2", 00:21:28.840 "uuid": 
"5c4dc79e-ad7d-435c-8674-3479e22a5cf5", 00:21:28.840 "is_configured": true, 00:21:28.840 "data_offset": 256, 00:21:28.840 "data_size": 7936 00:21:28.840 } 00:21:28.840 ] 00:21:28.840 }' 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.840 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.408 [2024-11-20 07:18:11.551392] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:29.408 [2024-11-20 07:18:11.551576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.408 [2024-11-20 07:18:11.660661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.408 [2024-11-20 07:18:11.660803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.408 [2024-11-20 07:18:11.660852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.408 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:29.667 07:18:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87719 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87719 ']' 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87719 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87719 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.667 killing process with pid 87719 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87719' 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87719 00:21:29.667 [2024-11-20 07:18:11.746189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.667 07:18:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87719 00:21:29.667 [2024-11-20 07:18:11.765227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.046 07:18:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:21:31.046 00:21:31.046 real 0m5.361s 00:21:31.046 user 0m7.738s 00:21:31.046 sys 0m0.874s 00:21:31.046 07:18:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.046 
07:18:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.046 ************************************ 00:21:31.046 END TEST raid_state_function_test_sb_md_separate 00:21:31.046 ************************************ 00:21:31.046 07:18:13 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:31.046 07:18:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:31.046 07:18:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.046 07:18:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.046 ************************************ 00:21:31.046 START TEST raid_superblock_test_md_separate 00:21:31.046 ************************************ 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87974 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87974 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87974 ']' 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.046 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.046 [2024-11-20 07:18:13.136546] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:31.046 [2024-11-20 07:18:13.136688] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87974 ] 00:21:31.046 [2024-11-20 07:18:13.293814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.306 [2024-11-20 07:18:13.414692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.565 [2024-11-20 07:18:13.626220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:31.565 [2024-11-20 07:18:13.626255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:31.825 07:18:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.825 07:18:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.825 malloc1 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.825 [2024-11-20 07:18:14.046816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:31.825 [2024-11-20 07:18:14.046924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.825 [2024-11-20 07:18:14.046970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:31.825 [2024-11-20 07:18:14.047003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.825 [2024-11-20 07:18:14.049182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.825 [2024-11-20 07:18:14.049265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:21:31.825 pt1 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.825 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.085 malloc2 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.085 07:18:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.085 [2024-11-20 07:18:14.112083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:32.085 [2024-11-20 07:18:14.112203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.085 [2024-11-20 07:18:14.112233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:32.085 [2024-11-20 07:18:14.112244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.085 [2024-11-20 07:18:14.114481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.085 [2024-11-20 07:18:14.114522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:32.085 pt2 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.085 [2024-11-20 07:18:14.124093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:32.085 [2024-11-20 07:18:14.126182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:32.085 [2024-11-20 07:18:14.126473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:32.085 [2024-11-20 07:18:14.126496] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:32.085 [2024-11-20 07:18:14.126600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:32.085 [2024-11-20 07:18:14.126743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:32.085 [2024-11-20 07:18:14.126757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:32.085 [2024-11-20 07:18:14.126879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.085 07:18:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.085 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.085 "name": "raid_bdev1", 00:21:32.086 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:32.086 "strip_size_kb": 0, 00:21:32.086 "state": "online", 00:21:32.086 "raid_level": "raid1", 00:21:32.086 "superblock": true, 00:21:32.086 "num_base_bdevs": 2, 00:21:32.086 "num_base_bdevs_discovered": 2, 00:21:32.086 "num_base_bdevs_operational": 2, 00:21:32.086 "base_bdevs_list": [ 00:21:32.086 { 00:21:32.086 "name": "pt1", 00:21:32.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:32.086 "is_configured": true, 00:21:32.086 "data_offset": 256, 00:21:32.086 "data_size": 7936 00:21:32.086 }, 00:21:32.086 { 00:21:32.086 "name": "pt2", 00:21:32.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:32.086 "is_configured": true, 00:21:32.086 "data_offset": 256, 00:21:32.086 "data_size": 7936 00:21:32.086 } 00:21:32.086 ] 00:21:32.086 }' 00:21:32.086 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.086 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.346 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.346 [2024-11-20 07:18:14.599641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.606 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.606 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:32.606 "name": "raid_bdev1", 00:21:32.606 "aliases": [ 00:21:32.606 "0e609552-1a1e-4618-8128-dc587af2d486" 00:21:32.606 ], 00:21:32.606 "product_name": "Raid Volume", 00:21:32.606 "block_size": 4096, 00:21:32.606 "num_blocks": 7936, 00:21:32.606 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:32.606 "md_size": 32, 00:21:32.606 "md_interleave": false, 00:21:32.606 "dif_type": 0, 00:21:32.606 "assigned_rate_limits": { 00:21:32.606 "rw_ios_per_sec": 0, 00:21:32.606 "rw_mbytes_per_sec": 0, 00:21:32.606 "r_mbytes_per_sec": 0, 00:21:32.606 "w_mbytes_per_sec": 0 00:21:32.606 }, 00:21:32.606 "claimed": false, 00:21:32.606 "zoned": false, 
00:21:32.606 "supported_io_types": { 00:21:32.606 "read": true, 00:21:32.606 "write": true, 00:21:32.606 "unmap": false, 00:21:32.606 "flush": false, 00:21:32.606 "reset": true, 00:21:32.606 "nvme_admin": false, 00:21:32.606 "nvme_io": false, 00:21:32.606 "nvme_io_md": false, 00:21:32.606 "write_zeroes": true, 00:21:32.606 "zcopy": false, 00:21:32.606 "get_zone_info": false, 00:21:32.606 "zone_management": false, 00:21:32.606 "zone_append": false, 00:21:32.606 "compare": false, 00:21:32.606 "compare_and_write": false, 00:21:32.606 "abort": false, 00:21:32.606 "seek_hole": false, 00:21:32.606 "seek_data": false, 00:21:32.606 "copy": false, 00:21:32.606 "nvme_iov_md": false 00:21:32.606 }, 00:21:32.606 "memory_domains": [ 00:21:32.606 { 00:21:32.606 "dma_device_id": "system", 00:21:32.606 "dma_device_type": 1 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.606 "dma_device_type": 2 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "dma_device_id": "system", 00:21:32.606 "dma_device_type": 1 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.606 "dma_device_type": 2 00:21:32.606 } 00:21:32.606 ], 00:21:32.606 "driver_specific": { 00:21:32.606 "raid": { 00:21:32.606 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:32.606 "strip_size_kb": 0, 00:21:32.606 "state": "online", 00:21:32.606 "raid_level": "raid1", 00:21:32.606 "superblock": true, 00:21:32.606 "num_base_bdevs": 2, 00:21:32.606 "num_base_bdevs_discovered": 2, 00:21:32.606 "num_base_bdevs_operational": 2, 00:21:32.606 "base_bdevs_list": [ 00:21:32.606 { 00:21:32.606 "name": "pt1", 00:21:32.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 256, 00:21:32.606 "data_size": 7936 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "name": "pt2", 00:21:32.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 256, 
00:21:32.606 "data_size": 7936 00:21:32.606 } 00:21:32.606 ] 00:21:32.606 } 00:21:32.606 } 00:21:32.606 }' 00:21:32.606 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:32.606 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:32.606 pt2' 00:21:32.606 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.606 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.607 [2024-11-20 07:18:14.843188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.607 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0e609552-1a1e-4618-8128-dc587af2d486 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 0e609552-1a1e-4618-8128-dc587af2d486 ']' 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.867 [2024-11-20 07:18:14.890785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:32.867 [2024-11-20 07:18:14.890857] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:32.867 [2024-11-20 07:18:14.891002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.867 [2024-11-20 07:18:14.891103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.867 [2024-11-20 07:18:14.891156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.867 07:18:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:32.867 07:18:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.867 [2024-11-20 07:18:15.034552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:32.867 [2024-11-20 07:18:15.036578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:32.867 [2024-11-20 07:18:15.036659] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:32.867 [2024-11-20 07:18:15.036723] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:32.867 [2024-11-20 07:18:15.036740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:32.867 [2024-11-20 07:18:15.036751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:32.867 request: 00:21:32.867 { 00:21:32.867 "name": 
"raid_bdev1", 00:21:32.867 "raid_level": "raid1", 00:21:32.867 "base_bdevs": [ 00:21:32.867 "malloc1", 00:21:32.867 "malloc2" 00:21:32.867 ], 00:21:32.867 "superblock": false, 00:21:32.867 "method": "bdev_raid_create", 00:21:32.867 "req_id": 1 00:21:32.867 } 00:21:32.867 Got JSON-RPC error response 00:21:32.867 response: 00:21:32.867 { 00:21:32.867 "code": -17, 00:21:32.867 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:32.867 } 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:32.867 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.868 [2024-11-20 07:18:15.098444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:32.868 [2024-11-20 07:18:15.098552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.868 [2024-11-20 07:18:15.098591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:32.868 [2024-11-20 07:18:15.098649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.868 [2024-11-20 07:18:15.100905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.868 [2024-11-20 07:18:15.100987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:32.868 [2024-11-20 07:18:15.101108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:32.868 [2024-11-20 07:18:15.101199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:32.868 pt1 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.868 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.127 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.127 "name": "raid_bdev1", 00:21:33.127 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:33.127 "strip_size_kb": 0, 00:21:33.127 "state": "configuring", 00:21:33.127 "raid_level": "raid1", 00:21:33.127 "superblock": true, 00:21:33.127 "num_base_bdevs": 2, 00:21:33.127 "num_base_bdevs_discovered": 1, 00:21:33.127 "num_base_bdevs_operational": 2, 00:21:33.127 "base_bdevs_list": [ 00:21:33.127 { 00:21:33.127 "name": "pt1", 00:21:33.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.127 "is_configured": true, 00:21:33.127 "data_offset": 256, 00:21:33.127 "data_size": 7936 00:21:33.127 }, 00:21:33.127 { 00:21:33.127 "name": null, 00:21:33.127 
"uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.127 "is_configured": false, 00:21:33.127 "data_offset": 256, 00:21:33.127 "data_size": 7936 00:21:33.127 } 00:21:33.127 ] 00:21:33.127 }' 00:21:33.127 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.127 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.387 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:33.387 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:33.387 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.388 [2024-11-20 07:18:15.589638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:33.388 [2024-11-20 07:18:15.589727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.388 [2024-11-20 07:18:15.589753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:33.388 [2024-11-20 07:18:15.589767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.388 [2024-11-20 07:18:15.590021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.388 [2024-11-20 07:18:15.590053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:33.388 [2024-11-20 07:18:15.590113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:21:33.388 [2024-11-20 07:18:15.590139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:33.388 [2024-11-20 07:18:15.590284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:33.388 [2024-11-20 07:18:15.590302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:33.388 [2024-11-20 07:18:15.590403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:33.388 [2024-11-20 07:18:15.590536] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:33.388 [2024-11-20 07:18:15.590545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:33.388 [2024-11-20 07:18:15.590649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.388 pt2 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.388 "name": "raid_bdev1", 00:21:33.388 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:33.388 "strip_size_kb": 0, 00:21:33.388 "state": "online", 00:21:33.388 "raid_level": "raid1", 00:21:33.388 "superblock": true, 00:21:33.388 "num_base_bdevs": 2, 00:21:33.388 "num_base_bdevs_discovered": 2, 00:21:33.388 "num_base_bdevs_operational": 2, 00:21:33.388 "base_bdevs_list": [ 00:21:33.388 { 00:21:33.388 "name": "pt1", 00:21:33.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.388 "is_configured": true, 00:21:33.388 "data_offset": 256, 00:21:33.388 "data_size": 7936 00:21:33.388 }, 00:21:33.388 { 00:21:33.388 "name": "pt2", 00:21:33.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.388 "is_configured": true, 00:21:33.388 "data_offset": 256, 
00:21:33.388 "data_size": 7936 00:21:33.388 } 00:21:33.388 ] 00:21:33.388 }' 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.388 07:18:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.957 [2024-11-20 07:18:16.081181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.957 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:33.957 "name": "raid_bdev1", 00:21:33.957 "aliases": [ 00:21:33.957 "0e609552-1a1e-4618-8128-dc587af2d486" 00:21:33.957 ], 00:21:33.957 "product_name": 
"Raid Volume", 00:21:33.957 "block_size": 4096, 00:21:33.957 "num_blocks": 7936, 00:21:33.957 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:33.957 "md_size": 32, 00:21:33.957 "md_interleave": false, 00:21:33.957 "dif_type": 0, 00:21:33.957 "assigned_rate_limits": { 00:21:33.957 "rw_ios_per_sec": 0, 00:21:33.957 "rw_mbytes_per_sec": 0, 00:21:33.957 "r_mbytes_per_sec": 0, 00:21:33.957 "w_mbytes_per_sec": 0 00:21:33.957 }, 00:21:33.957 "claimed": false, 00:21:33.957 "zoned": false, 00:21:33.957 "supported_io_types": { 00:21:33.957 "read": true, 00:21:33.957 "write": true, 00:21:33.957 "unmap": false, 00:21:33.957 "flush": false, 00:21:33.957 "reset": true, 00:21:33.957 "nvme_admin": false, 00:21:33.957 "nvme_io": false, 00:21:33.957 "nvme_io_md": false, 00:21:33.957 "write_zeroes": true, 00:21:33.957 "zcopy": false, 00:21:33.957 "get_zone_info": false, 00:21:33.957 "zone_management": false, 00:21:33.957 "zone_append": false, 00:21:33.957 "compare": false, 00:21:33.957 "compare_and_write": false, 00:21:33.957 "abort": false, 00:21:33.957 "seek_hole": false, 00:21:33.957 "seek_data": false, 00:21:33.957 "copy": false, 00:21:33.957 "nvme_iov_md": false 00:21:33.957 }, 00:21:33.957 "memory_domains": [ 00:21:33.957 { 00:21:33.957 "dma_device_id": "system", 00:21:33.957 "dma_device_type": 1 00:21:33.957 }, 00:21:33.957 { 00:21:33.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.957 "dma_device_type": 2 00:21:33.957 }, 00:21:33.957 { 00:21:33.957 "dma_device_id": "system", 00:21:33.957 "dma_device_type": 1 00:21:33.957 }, 00:21:33.957 { 00:21:33.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.957 "dma_device_type": 2 00:21:33.957 } 00:21:33.957 ], 00:21:33.957 "driver_specific": { 00:21:33.957 "raid": { 00:21:33.957 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:33.957 "strip_size_kb": 0, 00:21:33.957 "state": "online", 00:21:33.957 "raid_level": "raid1", 00:21:33.957 "superblock": true, 00:21:33.957 "num_base_bdevs": 2, 00:21:33.957 
"num_base_bdevs_discovered": 2, 00:21:33.957 "num_base_bdevs_operational": 2, 00:21:33.957 "base_bdevs_list": [ 00:21:33.957 { 00:21:33.957 "name": "pt1", 00:21:33.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.958 "is_configured": true, 00:21:33.958 "data_offset": 256, 00:21:33.958 "data_size": 7936 00:21:33.958 }, 00:21:33.958 { 00:21:33.958 "name": "pt2", 00:21:33.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.958 "is_configured": true, 00:21:33.958 "data_offset": 256, 00:21:33.958 "data_size": 7936 00:21:33.958 } 00:21:33.958 ] 00:21:33.958 } 00:21:33.958 } 00:21:33.958 }' 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:33.958 pt2' 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.958 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.217 
07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.217 [2024-11-20 07:18:16.312827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 0e609552-1a1e-4618-8128-dc587af2d486 '!=' 0e609552-1a1e-4618-8128-dc587af2d486 ']' 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.217 [2024-11-20 07:18:16.340517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:34.217 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.218 07:18:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.218 "name": "raid_bdev1", 00:21:34.218 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:34.218 "strip_size_kb": 0, 00:21:34.218 "state": "online", 00:21:34.218 "raid_level": "raid1", 00:21:34.218 "superblock": true, 00:21:34.218 "num_base_bdevs": 2, 00:21:34.218 "num_base_bdevs_discovered": 1, 00:21:34.218 "num_base_bdevs_operational": 1, 00:21:34.218 "base_bdevs_list": [ 00:21:34.218 { 00:21:34.218 "name": null, 00:21:34.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.218 "is_configured": false, 00:21:34.218 "data_offset": 0, 00:21:34.218 "data_size": 7936 00:21:34.218 }, 00:21:34.218 { 00:21:34.218 "name": "pt2", 00:21:34.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.218 "is_configured": true, 00:21:34.218 "data_offset": 256, 00:21:34.218 "data_size": 7936 00:21:34.218 } 00:21:34.218 ] 00:21:34.218 }' 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:34.218 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.518 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:34.518 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.518 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.797 [2024-11-20 07:18:16.775714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.797 [2024-11-20 07:18:16.775806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.797 [2024-11-20 07:18:16.775926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.798 [2024-11-20 07:18:16.776001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.798 [2024-11-20 07:18:16.776057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:34.798 07:18:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.798 [2024-11-20 07:18:16.855585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:34.798 [2024-11-20 07:18:16.855710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.798 
[2024-11-20 07:18:16.855768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:34.798 [2024-11-20 07:18:16.855807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.798 [2024-11-20 07:18:16.858161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.798 [2024-11-20 07:18:16.858251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:34.798 [2024-11-20 07:18:16.858354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:34.798 [2024-11-20 07:18:16.858443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:34.798 [2024-11-20 07:18:16.858601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:34.798 [2024-11-20 07:18:16.858649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:34.798 [2024-11-20 07:18:16.858759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:34.798 [2024-11-20 07:18:16.858940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:34.798 [2024-11-20 07:18:16.858983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:34.798 [2024-11-20 07:18:16.859152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.798 pt2 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.798 "name": "raid_bdev1", 00:21:34.798 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:34.798 "strip_size_kb": 0, 00:21:34.798 "state": "online", 00:21:34.798 "raid_level": "raid1", 00:21:34.798 "superblock": true, 00:21:34.798 "num_base_bdevs": 2, 00:21:34.798 "num_base_bdevs_discovered": 1, 00:21:34.798 "num_base_bdevs_operational": 1, 00:21:34.798 "base_bdevs_list": [ 00:21:34.798 { 00:21:34.798 
"name": null, 00:21:34.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.798 "is_configured": false, 00:21:34.798 "data_offset": 256, 00:21:34.798 "data_size": 7936 00:21:34.798 }, 00:21:34.798 { 00:21:34.798 "name": "pt2", 00:21:34.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.798 "is_configured": true, 00:21:34.798 "data_offset": 256, 00:21:34.798 "data_size": 7936 00:21:34.798 } 00:21:34.798 ] 00:21:34.798 }' 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.798 07:18:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.057 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:35.057 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.057 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.317 [2024-11-20 07:18:17.326796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.317 [2024-11-20 07:18:17.326896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.317 [2024-11-20 07:18:17.327009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.317 [2024-11-20 07:18:17.327115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.317 [2024-11-20 07:18:17.327169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.318 07:18:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.318 [2024-11-20 07:18:17.386732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:35.318 [2024-11-20 07:18:17.386851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.318 [2024-11-20 07:18:17.386894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:35.318 [2024-11-20 07:18:17.386927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.318 [2024-11-20 07:18:17.389253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.318 [2024-11-20 07:18:17.389331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:35.318 [2024-11-20 07:18:17.389454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:21:35.318 [2024-11-20 07:18:17.389534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:35.318 [2024-11-20 07:18:17.389732] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:35.318 [2024-11-20 07:18:17.389791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.318 [2024-11-20 07:18:17.389871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:35.318 [2024-11-20 07:18:17.390018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:35.318 [2024-11-20 07:18:17.390133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:35.318 [2024-11-20 07:18:17.390173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:35.318 [2024-11-20 07:18:17.390304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:35.318 [2024-11-20 07:18:17.390476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:35.318 [2024-11-20 07:18:17.390522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:35.318 [2024-11-20 07:18:17.390722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.318 pt1 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.318 "name": "raid_bdev1", 00:21:35.318 "uuid": "0e609552-1a1e-4618-8128-dc587af2d486", 00:21:35.318 "strip_size_kb": 0, 00:21:35.318 "state": "online", 00:21:35.318 "raid_level": "raid1", 00:21:35.318 "superblock": true, 00:21:35.318 "num_base_bdevs": 2, 00:21:35.318 "num_base_bdevs_discovered": 1, 00:21:35.318 
"num_base_bdevs_operational": 1, 00:21:35.318 "base_bdevs_list": [ 00:21:35.318 { 00:21:35.318 "name": null, 00:21:35.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.318 "is_configured": false, 00:21:35.318 "data_offset": 256, 00:21:35.318 "data_size": 7936 00:21:35.318 }, 00:21:35.318 { 00:21:35.318 "name": "pt2", 00:21:35.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.318 "is_configured": true, 00:21:35.318 "data_offset": 256, 00:21:35.318 "data_size": 7936 00:21:35.318 } 00:21:35.318 ] 00:21:35.318 }' 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.318 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:35.887 [2024-11-20 
07:18:17.906173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 0e609552-1a1e-4618-8128-dc587af2d486 '!=' 0e609552-1a1e-4618-8128-dc587af2d486 ']' 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87974 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87974 ']' 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87974 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87974 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.887 killing process with pid 87974 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87974' 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87974 00:21:35.887 [2024-11-20 07:18:17.965765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:35.887 [2024-11-20 07:18:17.965874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.887 07:18:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87974 
00:21:35.887 [2024-11-20 07:18:17.965930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.887 [2024-11-20 07:18:17.965949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:36.147 [2024-11-20 07:18:18.203210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.525 07:18:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:21:37.525 00:21:37.525 real 0m6.403s 00:21:37.525 user 0m9.658s 00:21:37.525 sys 0m1.133s 00:21:37.525 07:18:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.525 07:18:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.525 ************************************ 00:21:37.525 END TEST raid_superblock_test_md_separate 00:21:37.525 ************************************ 00:21:37.525 07:18:19 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:21:37.525 07:18:19 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:21:37.525 07:18:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:37.525 07:18:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.525 07:18:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.525 ************************************ 00:21:37.525 START TEST raid_rebuild_test_sb_md_separate 00:21:37.525 ************************************ 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:37.525 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:37.526 
07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88301 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88301 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88301 ']' 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.526 07:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.526 [2024-11-20 07:18:19.623169] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:37.526 [2024-11-20 07:18:19.623390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88301 ] 00:21:37.526 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:37.526 Zero copy mechanism will not be used. 00:21:37.786 [2024-11-20 07:18:19.804982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.786 [2024-11-20 07:18:19.939217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.047 [2024-11-20 07:18:20.170998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.047 [2024-11-20 07:18:20.171135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.306 BaseBdev1_malloc 
00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.306 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.599 [2024-11-20 07:18:20.572569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:38.599 [2024-11-20 07:18:20.572746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.599 [2024-11-20 07:18:20.572824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:38.599 [2024-11-20 07:18:20.572877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.599 [2024-11-20 07:18:20.575476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.599 [2024-11-20 07:18:20.575583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:38.599 BaseBdev1 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.599 BaseBdev2_malloc 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.599 [2024-11-20 07:18:20.633624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:38.599 [2024-11-20 07:18:20.633705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.599 [2024-11-20 07:18:20.633729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:38.599 [2024-11-20 07:18:20.633743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.599 [2024-11-20 07:18:20.636007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.599 [2024-11-20 07:18:20.636105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:38.599 BaseBdev2 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.599 spare_malloc 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.599 spare_delay 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.599 [2024-11-20 07:18:20.719230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:38.599 [2024-11-20 07:18:20.719388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.599 [2024-11-20 07:18:20.719424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:38.599 [2024-11-20 07:18:20.719437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.599 [2024-11-20 07:18:20.721694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.599 [2024-11-20 07:18:20.721740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:38.599 spare 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.599 [2024-11-20 07:18:20.731253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.599 [2024-11-20 07:18:20.733308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.599 [2024-11-20 07:18:20.733540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:38.599 [2024-11-20 07:18:20.733558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:38.599 [2024-11-20 07:18:20.733658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:38.599 [2024-11-20 07:18:20.733797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:38.599 [2024-11-20 07:18:20.733806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:38.599 [2024-11-20 07:18:20.733947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.599 07:18:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.599 "name": "raid_bdev1", 00:21:38.599 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:38.599 "strip_size_kb": 0, 00:21:38.599 "state": "online", 00:21:38.599 "raid_level": "raid1", 00:21:38.599 "superblock": true, 00:21:38.599 "num_base_bdevs": 2, 00:21:38.599 "num_base_bdevs_discovered": 2, 00:21:38.599 "num_base_bdevs_operational": 2, 00:21:38.599 "base_bdevs_list": [ 00:21:38.599 { 00:21:38.599 "name": "BaseBdev1", 00:21:38.599 "uuid": "217fc80a-f52e-5977-9d27-150f42efc0cd", 00:21:38.599 "is_configured": true, 00:21:38.599 "data_offset": 256, 00:21:38.599 "data_size": 7936 00:21:38.599 }, 00:21:38.599 { 00:21:38.599 "name": "BaseBdev2", 00:21:38.599 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:38.599 "is_configured": true, 00:21:38.599 "data_offset": 256, 00:21:38.599 "data_size": 7936 
00:21:38.599 } 00:21:38.599 ] 00:21:38.599 }' 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.599 07:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 [2024-11-20 07:18:21.234795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.185 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:39.186 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.186 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:39.186 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.186 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.186 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:39.443 [2024-11-20 07:18:21.545972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:39.443 /dev/nbd0 00:21:39.443 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:39.443 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:39.444 1+0 records in 00:21:39.444 1+0 records out 00:21:39.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346164 s, 11.8 MB/s 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.444 07:18:21 
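The trace above shows the `waitfornbd` helper polling `/proc/partitions` until the nbd device registers. A minimal stand-alone sketch of that retry pattern, using a mock partitions file instead of the real `/proc/partitions` (the file path and entry format here are illustrative, not the helper's actual implementation):

```shell
# Hedged sketch of the waitfornbd retry loop seen in the trace: poll until
# the device name appears as a whole word, up to 20 attempts.
partitions=$(mktemp)
printf '259 0 1000 nbd0\n' > "$partitions"   # pretend the kernel registered nbd0
i=1
while [ "$i" -le 20 ]; do
    if grep -q -w nbd0 "$partitions"; then
        echo "nbd0 ready after $i check(s)"
        break
    fi
    i=$((i + 1))
    sleep 0.1
done
rm -f "$partitions"
```

The `grep -q -w` match is what prevents `nbd0` from false-matching longer names such as `nbd01`.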
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:39.444 07:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:40.441 7936+0 records in 00:21:40.441 7936+0 records out 00:21:40.441 32505856 bytes (33 MB, 31 MiB) copied, 0.751786 s, 43.2 MB/s 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:40.441 [2024-11-20 07:18:22.616627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.441 07:18:22 
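The `dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct` step above fills the exported raid bdev with random data before the rebuild is exercised. A self-contained sketch of the same fill-and-measure pattern, writing to a temp file instead of a real nbd device (block count reduced; `oflag=direct` dropped since it needs a block device):

```shell
# Hedged sketch: write 8 blocks of 4 KiB random data, then confirm the size.
target=$(mktemp)
dd if=/dev/urandom of="$target" bs=4096 count=8 2>/dev/null
size=$(stat -c %s "$target")   # GNU stat; expect 8 * 4096 = 32768 bytes
echo "wrote $size bytes"
rm -f "$target"
```

In the real test the byte count (7936 blocks, 32505856 bytes) must match the raid bdev's `num_blocks` so the whole device is covered.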
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.441 [2024-11-20 07:18:22.636774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.441 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.441 "name": "raid_bdev1", 00:21:40.441 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:40.441 "strip_size_kb": 0, 00:21:40.441 "state": "online", 00:21:40.441 "raid_level": "raid1", 00:21:40.441 "superblock": true, 00:21:40.441 "num_base_bdevs": 2, 00:21:40.441 "num_base_bdevs_discovered": 1, 00:21:40.441 "num_base_bdevs_operational": 1, 00:21:40.441 "base_bdevs_list": [ 00:21:40.441 { 00:21:40.441 "name": null, 00:21:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.441 "is_configured": false, 00:21:40.441 "data_offset": 0, 00:21:40.441 "data_size": 7936 00:21:40.441 }, 00:21:40.441 { 00:21:40.441 "name": "BaseBdev2", 00:21:40.442 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:40.442 "is_configured": true, 00:21:40.442 "data_offset": 256, 00:21:40.442 "data_size": 7936 00:21:40.442 } 00:21:40.442 ] 00:21:40.442 }' 00:21:40.442 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.442 07:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
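Each `verify_raid_bdev_state` call in the trace pipes `rpc.py bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` to isolate one raid bdev from the array. A sketch of that selection against a trimmed-down inline copy of the RPC output (the JSON here is a hand-written stand-in, not captured server output; requires `jq`):

```shell
# Hedged sketch: select the raid_bdev1 entry from a bdev_raid_get_bdevs-style
# array and read a single field from it.
json='[{"name":"raid_bdev1","state":"online","num_base_bdevs_discovered":1},
       {"name":"other_bdev","state":"offline"}]'
state=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1") | .state')
echo "raid_bdev1 is $state"
```

The `-r` flag emits the raw string (`online`) rather than the JSON-quoted `"online"`, which is what lets the test script compare it directly in `[[ ... ]]` checks.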
common/autotest_common.sh@10 -- # set +x 00:21:41.007 07:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:41.007 07:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.007 07:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.007 [2024-11-20 07:18:23.107964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:41.007 [2024-11-20 07:18:23.124720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:41.007 07:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.007 07:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:41.007 [2024-11-20 07:18:23.127041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.941 "name": "raid_bdev1", 00:21:41.941 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:41.941 "strip_size_kb": 0, 00:21:41.941 "state": "online", 00:21:41.941 "raid_level": "raid1", 00:21:41.941 "superblock": true, 00:21:41.941 "num_base_bdevs": 2, 00:21:41.941 "num_base_bdevs_discovered": 2, 00:21:41.941 "num_base_bdevs_operational": 2, 00:21:41.941 "process": { 00:21:41.941 "type": "rebuild", 00:21:41.941 "target": "spare", 00:21:41.941 "progress": { 00:21:41.941 "blocks": 2560, 00:21:41.941 "percent": 32 00:21:41.941 } 00:21:41.941 }, 00:21:41.941 "base_bdevs_list": [ 00:21:41.941 { 00:21:41.941 "name": "spare", 00:21:41.941 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:41.941 "is_configured": true, 00:21:41.941 "data_offset": 256, 00:21:41.941 "data_size": 7936 00:21:41.941 }, 00:21:41.941 { 00:21:41.941 "name": "BaseBdev2", 00:21:41.941 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:41.941 "is_configured": true, 00:21:41.941 "data_offset": 256, 00:21:41.941 "data_size": 7936 00:21:41.941 } 00:21:41.941 ] 00:21:41.941 }' 00:21:41.941 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:42.200 07:18:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.200 [2024-11-20 07:18:24.270423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:42.200 [2024-11-20 07:18:24.333589] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:42.200 [2024-11-20 07:18:24.333744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.200 [2024-11-20 07:18:24.333765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:42.200 [2024-11-20 07:18:24.333777] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.200 07:18:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.200 "name": "raid_bdev1", 00:21:42.200 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:42.200 "strip_size_kb": 0, 00:21:42.200 "state": "online", 00:21:42.200 "raid_level": "raid1", 00:21:42.200 "superblock": true, 00:21:42.200 "num_base_bdevs": 2, 00:21:42.200 "num_base_bdevs_discovered": 1, 00:21:42.200 "num_base_bdevs_operational": 1, 00:21:42.200 "base_bdevs_list": [ 00:21:42.200 { 00:21:42.200 "name": null, 00:21:42.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.200 "is_configured": false, 00:21:42.200 "data_offset": 0, 00:21:42.200 "data_size": 7936 00:21:42.200 }, 00:21:42.200 { 00:21:42.200 "name": "BaseBdev2", 00:21:42.200 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:42.200 "is_configured": true, 00:21:42.200 "data_offset": 256, 00:21:42.200 "data_size": 7936 00:21:42.200 } 00:21:42.200 ] 00:21:42.200 }' 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.200 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.827 "name": "raid_bdev1", 00:21:42.827 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:42.827 "strip_size_kb": 0, 00:21:42.827 "state": "online", 00:21:42.827 "raid_level": "raid1", 00:21:42.827 "superblock": true, 00:21:42.827 "num_base_bdevs": 2, 00:21:42.827 "num_base_bdevs_discovered": 1, 00:21:42.827 "num_base_bdevs_operational": 1, 00:21:42.827 "base_bdevs_list": [ 00:21:42.827 { 00:21:42.827 "name": null, 00:21:42.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.827 
"is_configured": false, 00:21:42.827 "data_offset": 0, 00:21:42.827 "data_size": 7936 00:21:42.827 }, 00:21:42.827 { 00:21:42.827 "name": "BaseBdev2", 00:21:42.827 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:42.827 "is_configured": true, 00:21:42.827 "data_offset": 256, 00:21:42.827 "data_size": 7936 00:21:42.827 } 00:21:42.827 ] 00:21:42.827 }' 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.827 [2024-11-20 07:18:24.936866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:42.827 [2024-11-20 07:18:24.954097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.827 07:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:42.827 [2024-11-20 07:18:24.956202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:43.764 07:18:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:43.764 07:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.764 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.764 "name": "raid_bdev1", 00:21:43.764 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:43.764 "strip_size_kb": 0, 00:21:43.764 "state": "online", 00:21:43.764 "raid_level": "raid1", 00:21:43.764 "superblock": true, 00:21:43.764 "num_base_bdevs": 2, 00:21:43.764 "num_base_bdevs_discovered": 2, 00:21:43.764 "num_base_bdevs_operational": 2, 00:21:43.764 "process": { 00:21:43.764 "type": "rebuild", 00:21:43.764 "target": "spare", 00:21:43.764 "progress": { 00:21:43.764 "blocks": 2560, 00:21:43.764 "percent": 32 00:21:43.764 } 00:21:43.764 }, 00:21:43.764 "base_bdevs_list": [ 00:21:43.764 { 00:21:43.764 "name": "spare", 00:21:43.764 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:43.764 "is_configured": true, 00:21:43.764 "data_offset": 256, 00:21:43.764 "data_size": 7936 00:21:43.764 }, 
00:21:43.764 { 00:21:43.764 "name": "BaseBdev2", 00:21:43.764 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:43.764 "is_configured": true, 00:21:43.764 "data_offset": 256, 00:21:43.764 "data_size": 7936 00:21:43.764 } 00:21:43.764 ] 00:21:43.764 }' 00:21:43.764 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:44.023 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=742 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.023 07:18:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.023 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.023 "name": "raid_bdev1", 00:21:44.023 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:44.023 "strip_size_kb": 0, 00:21:44.023 "state": "online", 00:21:44.023 "raid_level": "raid1", 00:21:44.023 "superblock": true, 00:21:44.023 "num_base_bdevs": 2, 00:21:44.023 "num_base_bdevs_discovered": 2, 00:21:44.023 "num_base_bdevs_operational": 2, 00:21:44.023 "process": { 00:21:44.023 "type": "rebuild", 00:21:44.023 "target": "spare", 00:21:44.023 "progress": { 00:21:44.023 "blocks": 2816, 00:21:44.023 "percent": 35 00:21:44.023 } 00:21:44.023 }, 00:21:44.023 "base_bdevs_list": [ 00:21:44.023 { 00:21:44.023 "name": "spare", 00:21:44.023 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:44.023 "is_configured": true, 00:21:44.023 "data_offset": 256, 00:21:44.024 "data_size": 7936 00:21:44.024 }, 00:21:44.024 { 00:21:44.024 "name": "BaseBdev2", 00:21:44.024 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:44.024 
"is_configured": true, 00:21:44.024 "data_offset": 256, 00:21:44.024 "data_size": 7936 00:21:44.024 } 00:21:44.024 ] 00:21:44.024 }' 00:21:44.024 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.024 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.024 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.024 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.024 07:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.959 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.218 07:18:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.218 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.218 "name": "raid_bdev1", 00:21:45.218 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:45.218 "strip_size_kb": 0, 00:21:45.218 "state": "online", 00:21:45.218 "raid_level": "raid1", 00:21:45.218 "superblock": true, 00:21:45.218 "num_base_bdevs": 2, 00:21:45.218 "num_base_bdevs_discovered": 2, 00:21:45.218 "num_base_bdevs_operational": 2, 00:21:45.218 "process": { 00:21:45.218 "type": "rebuild", 00:21:45.218 "target": "spare", 00:21:45.218 "progress": { 00:21:45.218 "blocks": 5632, 00:21:45.218 "percent": 70 00:21:45.218 } 00:21:45.218 }, 00:21:45.218 "base_bdevs_list": [ 00:21:45.218 { 00:21:45.218 "name": "spare", 00:21:45.218 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:45.218 "is_configured": true, 00:21:45.218 "data_offset": 256, 00:21:45.218 "data_size": 7936 00:21:45.218 }, 00:21:45.218 { 00:21:45.218 "name": "BaseBdev2", 00:21:45.218 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:45.218 "is_configured": true, 00:21:45.218 "data_offset": 256, 00:21:45.218 "data_size": 7936 00:21:45.218 } 00:21:45.218 ] 00:21:45.218 }' 00:21:45.218 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.218 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:45.218 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.218 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:45.218 07:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:46.155 [2024-11-20 07:18:28.072023] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:21:46.155 [2024-11-20 07:18:28.072227] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:46.155 [2024-11-20 07:18:28.072406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.155 "name": "raid_bdev1", 00:21:46.155 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:46.155 "strip_size_kb": 0, 00:21:46.155 "state": "online", 00:21:46.155 "raid_level": "raid1", 00:21:46.155 "superblock": true, 00:21:46.155 
"num_base_bdevs": 2, 00:21:46.155 "num_base_bdevs_discovered": 2, 00:21:46.155 "num_base_bdevs_operational": 2, 00:21:46.155 "base_bdevs_list": [ 00:21:46.155 { 00:21:46.155 "name": "spare", 00:21:46.155 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:46.155 "is_configured": true, 00:21:46.155 "data_offset": 256, 00:21:46.155 "data_size": 7936 00:21:46.155 }, 00:21:46.155 { 00:21:46.155 "name": "BaseBdev2", 00:21:46.155 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:46.155 "is_configured": true, 00:21:46.155 "data_offset": 256, 00:21:46.155 "data_size": 7936 00:21:46.155 } 00:21:46.155 ] 00:21:46.155 }' 00:21:46.155 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.417 
07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.417 "name": "raid_bdev1", 00:21:46.417 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:46.417 "strip_size_kb": 0, 00:21:46.417 "state": "online", 00:21:46.417 "raid_level": "raid1", 00:21:46.417 "superblock": true, 00:21:46.417 "num_base_bdevs": 2, 00:21:46.417 "num_base_bdevs_discovered": 2, 00:21:46.417 "num_base_bdevs_operational": 2, 00:21:46.417 "base_bdevs_list": [ 00:21:46.417 { 00:21:46.417 "name": "spare", 00:21:46.417 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:46.417 "is_configured": true, 00:21:46.417 "data_offset": 256, 00:21:46.417 "data_size": 7936 00:21:46.417 }, 00:21:46.417 { 00:21:46.417 "name": "BaseBdev2", 00:21:46.417 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:46.417 "is_configured": true, 00:21:46.417 "data_offset": 256, 00:21:46.417 "data_size": 7936 00:21:46.417 } 00:21:46.417 ] 00:21:46.417 }' 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.417 "name": "raid_bdev1", 00:21:46.417 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:46.417 
"strip_size_kb": 0, 00:21:46.417 "state": "online", 00:21:46.417 "raid_level": "raid1", 00:21:46.417 "superblock": true, 00:21:46.417 "num_base_bdevs": 2, 00:21:46.417 "num_base_bdevs_discovered": 2, 00:21:46.417 "num_base_bdevs_operational": 2, 00:21:46.417 "base_bdevs_list": [ 00:21:46.417 { 00:21:46.417 "name": "spare", 00:21:46.417 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:46.417 "is_configured": true, 00:21:46.417 "data_offset": 256, 00:21:46.417 "data_size": 7936 00:21:46.417 }, 00:21:46.417 { 00:21:46.417 "name": "BaseBdev2", 00:21:46.417 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:46.417 "is_configured": true, 00:21:46.417 "data_offset": 256, 00:21:46.417 "data_size": 7936 00:21:46.417 } 00:21:46.417 ] 00:21:46.417 }' 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.417 07:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.984 [2024-11-20 07:18:29.080556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.984 [2024-11-20 07:18:29.080672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.984 [2024-11-20 07:18:29.080804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.984 [2024-11-20 07:18:29.080885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.984 [2024-11-20 07:18:29.080898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:46.984 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:47.263 /dev/nbd0 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.263 1+0 records in 00:21:47.263 1+0 records out 00:21:47.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345391 s, 11.9 MB/s 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:47.263 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:47.555 /dev/nbd1 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.555 1+0 records in 00:21:47.555 1+0 records out 00:21:47.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424591 s, 9.6 MB/s 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:47.555 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:47.814 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:47.814 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:47.814 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:47.814 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:47.814 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:47.814 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:47.814 07:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:48.073 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.333 [2024-11-20 07:18:30.417281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.333 [2024-11-20 07:18:30.417363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.333 [2024-11-20 07:18:30.417390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:48.333 [2024-11-20 07:18:30.417400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:48.333 [2024-11-20 07:18:30.419619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.333 [2024-11-20 07:18:30.419726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.333 [2024-11-20 07:18:30.419818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:48.333 [2024-11-20 07:18:30.419878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.333 [2024-11-20 07:18:30.420045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:48.333 spare 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.333 [2024-11-20 07:18:30.519953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:48.333 [2024-11-20 07:18:30.520017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:48.333 [2024-11-20 07:18:30.520175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:48.333 [2024-11-20 07:18:30.520406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:48.333 [2024-11-20 07:18:30.520417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:48.333 [2024-11-20 07:18:30.520577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.333 "name": "raid_bdev1", 00:21:48.333 "uuid": 
"101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:48.333 "strip_size_kb": 0, 00:21:48.333 "state": "online", 00:21:48.333 "raid_level": "raid1", 00:21:48.333 "superblock": true, 00:21:48.333 "num_base_bdevs": 2, 00:21:48.333 "num_base_bdevs_discovered": 2, 00:21:48.333 "num_base_bdevs_operational": 2, 00:21:48.333 "base_bdevs_list": [ 00:21:48.333 { 00:21:48.333 "name": "spare", 00:21:48.333 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:48.333 "is_configured": true, 00:21:48.333 "data_offset": 256, 00:21:48.333 "data_size": 7936 00:21:48.333 }, 00:21:48.333 { 00:21:48.333 "name": "BaseBdev2", 00:21:48.333 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:48.333 "is_configured": true, 00:21:48.333 "data_offset": 256, 00:21:48.333 "data_size": 7936 00:21:48.333 } 00:21:48.333 ] 00:21:48.333 }' 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.333 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.903 "name": "raid_bdev1", 00:21:48.903 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:48.903 "strip_size_kb": 0, 00:21:48.903 "state": "online", 00:21:48.903 "raid_level": "raid1", 00:21:48.903 "superblock": true, 00:21:48.903 "num_base_bdevs": 2, 00:21:48.903 "num_base_bdevs_discovered": 2, 00:21:48.903 "num_base_bdevs_operational": 2, 00:21:48.903 "base_bdevs_list": [ 00:21:48.903 { 00:21:48.903 "name": "spare", 00:21:48.903 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:48.903 "is_configured": true, 00:21:48.903 "data_offset": 256, 00:21:48.903 "data_size": 7936 00:21:48.903 }, 00:21:48.903 { 00:21:48.903 "name": "BaseBdev2", 00:21:48.903 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:48.903 "is_configured": true, 00:21:48.903 "data_offset": 256, 00:21:48.903 "data_size": 7936 00:21:48.903 } 00:21:48.903 ] 00:21:48.903 }' 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.903 07:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.903 07:18:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.903 [2024-11-20 07:18:31.108208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.903 "name": "raid_bdev1", 00:21:48.903 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:48.903 "strip_size_kb": 0, 00:21:48.903 "state": "online", 00:21:48.903 "raid_level": "raid1", 00:21:48.903 "superblock": true, 00:21:48.903 "num_base_bdevs": 2, 00:21:48.903 "num_base_bdevs_discovered": 1, 00:21:48.903 "num_base_bdevs_operational": 1, 00:21:48.903 "base_bdevs_list": [ 00:21:48.903 { 00:21:48.903 "name": null, 00:21:48.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.903 "is_configured": false, 00:21:48.903 "data_offset": 0, 00:21:48.903 "data_size": 7936 00:21:48.903 }, 00:21:48.903 { 00:21:48.903 "name": "BaseBdev2", 00:21:48.903 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:48.903 "is_configured": true, 00:21:48.903 "data_offset": 256, 00:21:48.903 "data_size": 7936 00:21:48.903 } 00:21:48.903 ] 00:21:48.903 }' 00:21:48.903 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.903 07:18:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.473 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:49.473 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.473 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.473 [2024-11-20 07:18:31.567456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.473 [2024-11-20 07:18:31.567769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:49.473 [2024-11-20 07:18:31.567844] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:49.473 [2024-11-20 07:18:31.567915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.473 [2024-11-20 07:18:31.583259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:49.473 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.473 07:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:49.473 [2024-11-20 07:18:31.585428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.411 "name": "raid_bdev1", 00:21:50.411 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:50.411 "strip_size_kb": 0, 00:21:50.411 "state": "online", 00:21:50.411 "raid_level": "raid1", 00:21:50.411 "superblock": true, 00:21:50.411 "num_base_bdevs": 2, 00:21:50.411 "num_base_bdevs_discovered": 2, 00:21:50.411 "num_base_bdevs_operational": 2, 00:21:50.411 "process": { 00:21:50.411 "type": "rebuild", 00:21:50.411 "target": "spare", 00:21:50.411 "progress": { 00:21:50.411 "blocks": 2560, 00:21:50.411 "percent": 32 00:21:50.411 } 00:21:50.411 }, 00:21:50.411 "base_bdevs_list": [ 00:21:50.411 { 00:21:50.411 "name": "spare", 00:21:50.411 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:50.411 "is_configured": true, 00:21:50.411 "data_offset": 256, 00:21:50.411 "data_size": 7936 00:21:50.411 }, 00:21:50.411 { 00:21:50.411 "name": "BaseBdev2", 00:21:50.411 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:50.411 "is_configured": true, 00:21:50.411 "data_offset": 256, 00:21:50.411 "data_size": 7936 00:21:50.411 } 00:21:50.411 ] 00:21:50.411 }' 00:21:50.411 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:50.671 [2024-11-20 07:18:32.753277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.671 [2024-11-20 07:18:32.791636] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:50.671 [2024-11-20 07:18:32.791846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.671 [2024-11-20 07:18:32.791865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.671 [2024-11-20 07:18:32.791895] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.671 "name": "raid_bdev1", 00:21:50.671 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:50.671 "strip_size_kb": 0, 00:21:50.671 "state": "online", 00:21:50.671 "raid_level": "raid1", 00:21:50.671 "superblock": true, 00:21:50.671 "num_base_bdevs": 2, 00:21:50.671 "num_base_bdevs_discovered": 1, 00:21:50.671 "num_base_bdevs_operational": 1, 00:21:50.671 "base_bdevs_list": [ 00:21:50.671 { 00:21:50.671 "name": null, 00:21:50.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.671 
"is_configured": false, 00:21:50.671 "data_offset": 0, 00:21:50.671 "data_size": 7936 00:21:50.671 }, 00:21:50.671 { 00:21:50.671 "name": "BaseBdev2", 00:21:50.671 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:50.671 "is_configured": true, 00:21:50.671 "data_offset": 256, 00:21:50.671 "data_size": 7936 00:21:50.671 } 00:21:50.671 ] 00:21:50.671 }' 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.671 07:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.239 07:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:51.239 07:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.239 07:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.239 [2024-11-20 07:18:33.289628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:51.239 [2024-11-20 07:18:33.289787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.239 [2024-11-20 07:18:33.289847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:51.239 [2024-11-20 07:18:33.289888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.239 [2024-11-20 07:18:33.290216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.239 [2024-11-20 07:18:33.290289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:51.239 [2024-11-20 07:18:33.290420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:51.239 [2024-11-20 07:18:33.290470] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:21:51.239 [2024-11-20 07:18:33.290517] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:51.239 [2024-11-20 07:18:33.290597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.239 [2024-11-20 07:18:33.307628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:51.239 spare 00:21:51.239 07:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.239 07:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:51.239 [2024-11-20 07:18:33.309974] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.190 "name": "raid_bdev1", 00:21:52.190 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:52.190 "strip_size_kb": 0, 00:21:52.190 "state": "online", 00:21:52.190 "raid_level": "raid1", 00:21:52.190 "superblock": true, 00:21:52.190 "num_base_bdevs": 2, 00:21:52.190 "num_base_bdevs_discovered": 2, 00:21:52.190 "num_base_bdevs_operational": 2, 00:21:52.190 "process": { 00:21:52.190 "type": "rebuild", 00:21:52.190 "target": "spare", 00:21:52.190 "progress": { 00:21:52.190 "blocks": 2560, 00:21:52.190 "percent": 32 00:21:52.190 } 00:21:52.190 }, 00:21:52.190 "base_bdevs_list": [ 00:21:52.190 { 00:21:52.190 "name": "spare", 00:21:52.190 "uuid": "81934d8d-a29e-5480-8dc0-8492a7f9d2bd", 00:21:52.190 "is_configured": true, 00:21:52.190 "data_offset": 256, 00:21:52.190 "data_size": 7936 00:21:52.190 }, 00:21:52.190 { 00:21:52.190 "name": "BaseBdev2", 00:21:52.190 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:52.190 "is_configured": true, 00:21:52.190 "data_offset": 256, 00:21:52.190 "data_size": 7936 00:21:52.190 } 00:21:52.190 ] 00:21:52.190 }' 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.190 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.453 07:18:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.453 [2024-11-20 07:18:34.457803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.453 [2024-11-20 07:18:34.516095] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:52.453 [2024-11-20 07:18:34.516288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.453 [2024-11-20 07:18:34.516331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.453 [2024-11-20 07:18:34.516364] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.453 07:18:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.453 "name": "raid_bdev1", 00:21:52.453 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:52.453 "strip_size_kb": 0, 00:21:52.453 "state": "online", 00:21:52.453 "raid_level": "raid1", 00:21:52.453 "superblock": true, 00:21:52.453 "num_base_bdevs": 2, 00:21:52.453 "num_base_bdevs_discovered": 1, 00:21:52.453 "num_base_bdevs_operational": 1, 00:21:52.453 "base_bdevs_list": [ 00:21:52.453 { 00:21:52.453 "name": null, 00:21:52.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.453 "is_configured": false, 00:21:52.453 "data_offset": 0, 00:21:52.453 "data_size": 7936 00:21:52.453 }, 00:21:52.453 { 00:21:52.453 "name": "BaseBdev2", 00:21:52.453 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:52.453 "is_configured": true, 00:21:52.453 "data_offset": 256, 00:21:52.453 "data_size": 7936 00:21:52.453 } 00:21:52.453 ] 00:21:52.453 }' 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.453 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.712 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.971 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.971 "name": "raid_bdev1", 00:21:52.971 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:52.971 "strip_size_kb": 0, 00:21:52.971 "state": "online", 00:21:52.971 "raid_level": "raid1", 00:21:52.971 "superblock": true, 00:21:52.971 "num_base_bdevs": 2, 00:21:52.971 "num_base_bdevs_discovered": 1, 00:21:52.971 "num_base_bdevs_operational": 1, 00:21:52.971 "base_bdevs_list": [ 00:21:52.971 { 00:21:52.971 "name": null, 00:21:52.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.971 "is_configured": false, 00:21:52.971 "data_offset": 0, 00:21:52.971 "data_size": 7936 00:21:52.971 }, 00:21:52.971 { 00:21:52.971 "name": "BaseBdev2", 00:21:52.971 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:52.971 "is_configured": true, 
00:21:52.971 "data_offset": 256, 00:21:52.971 "data_size": 7936 00:21:52.971 } 00:21:52.971 ] 00:21:52.971 }' 00:21:52.971 07:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.971 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.971 [2024-11-20 07:18:35.108927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:52.971 [2024-11-20 07:18:35.109062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.971 [2024-11-20 07:18:35.109092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:52.971 [2024-11-20 07:18:35.109118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.971 [2024-11-20 07:18:35.109377] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.971 [2024-11-20 07:18:35.109392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:52.971 [2024-11-20 07:18:35.109453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:52.971 [2024-11-20 07:18:35.109468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:52.972 [2024-11-20 07:18:35.109478] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:52.972 [2024-11-20 07:18:35.109492] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:52.972 BaseBdev1 00:21:52.972 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.972 07:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:53.907 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:53.907 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:53.907 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:53.907 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:53.907 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.908 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.167 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.167 "name": "raid_bdev1", 00:21:54.167 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:54.167 "strip_size_kb": 0, 00:21:54.167 "state": "online", 00:21:54.167 "raid_level": "raid1", 00:21:54.167 "superblock": true, 00:21:54.167 "num_base_bdevs": 2, 00:21:54.167 "num_base_bdevs_discovered": 1, 00:21:54.167 "num_base_bdevs_operational": 1, 00:21:54.167 "base_bdevs_list": [ 00:21:54.167 { 00:21:54.167 "name": null, 00:21:54.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.167 "is_configured": false, 00:21:54.167 "data_offset": 0, 00:21:54.167 "data_size": 7936 00:21:54.167 }, 00:21:54.167 { 00:21:54.167 "name": "BaseBdev2", 00:21:54.167 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:54.167 "is_configured": true, 00:21:54.167 "data_offset": 256, 00:21:54.167 "data_size": 7936 00:21:54.167 } 00:21:54.167 ] 00:21:54.167 }' 00:21:54.167 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.167 07:18:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.427 "name": "raid_bdev1", 00:21:54.427 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:54.427 "strip_size_kb": 0, 00:21:54.427 "state": "online", 00:21:54.427 "raid_level": "raid1", 00:21:54.427 "superblock": true, 00:21:54.427 "num_base_bdevs": 2, 00:21:54.427 "num_base_bdevs_discovered": 1, 00:21:54.427 "num_base_bdevs_operational": 1, 00:21:54.427 "base_bdevs_list": [ 00:21:54.427 { 00:21:54.427 "name": null, 00:21:54.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.427 "is_configured": false, 00:21:54.427 "data_offset": 0, 00:21:54.427 
"data_size": 7936 00:21:54.427 }, 00:21:54.427 { 00:21:54.427 "name": "BaseBdev2", 00:21:54.427 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:54.427 "is_configured": true, 00:21:54.427 "data_offset": 256, 00:21:54.427 "data_size": 7936 00:21:54.427 } 00:21:54.427 ] 00:21:54.427 }' 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:54.427 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:54.687 [2024-11-20 07:18:36.706469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.687 [2024-11-20 07:18:36.706715] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:54.687 [2024-11-20 07:18:36.706788] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:54.687 request: 00:21:54.687 { 00:21:54.687 "base_bdev": "BaseBdev1", 00:21:54.687 "raid_bdev": "raid_bdev1", 00:21:54.687 "method": "bdev_raid_add_base_bdev", 00:21:54.687 "req_id": 1 00:21:54.687 } 00:21:54.687 Got JSON-RPC error response 00:21:54.687 response: 00:21:54.687 { 00:21:54.687 "code": -22, 00:21:54.687 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:54.687 } 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.687 07:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.625 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.626 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.626 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.626 "name": "raid_bdev1", 00:21:55.626 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:55.626 "strip_size_kb": 0, 00:21:55.626 "state": "online", 00:21:55.626 "raid_level": "raid1", 00:21:55.626 "superblock": true, 00:21:55.626 "num_base_bdevs": 2, 00:21:55.626 "num_base_bdevs_discovered": 1, 00:21:55.626 "num_base_bdevs_operational": 1, 00:21:55.626 "base_bdevs_list": [ 
00:21:55.626 { 00:21:55.626 "name": null, 00:21:55.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.626 "is_configured": false, 00:21:55.626 "data_offset": 0, 00:21:55.626 "data_size": 7936 00:21:55.626 }, 00:21:55.626 { 00:21:55.626 "name": "BaseBdev2", 00:21:55.626 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:55.626 "is_configured": true, 00:21:55.626 "data_offset": 256, 00:21:55.626 "data_size": 7936 00:21:55.626 } 00:21:55.626 ] 00:21:55.626 }' 00:21:55.626 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.626 07:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.195 "name": "raid_bdev1", 00:21:56.195 "uuid": "101a0e7a-9744-40d5-aa2f-9271500dc40b", 00:21:56.195 "strip_size_kb": 0, 00:21:56.195 "state": "online", 00:21:56.195 "raid_level": "raid1", 00:21:56.195 "superblock": true, 00:21:56.195 "num_base_bdevs": 2, 00:21:56.195 "num_base_bdevs_discovered": 1, 00:21:56.195 "num_base_bdevs_operational": 1, 00:21:56.195 "base_bdevs_list": [ 00:21:56.195 { 00:21:56.195 "name": null, 00:21:56.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.195 "is_configured": false, 00:21:56.195 "data_offset": 0, 00:21:56.195 "data_size": 7936 00:21:56.195 }, 00:21:56.195 { 00:21:56.195 "name": "BaseBdev2", 00:21:56.195 "uuid": "f5be8dfd-28e1-563c-8ce4-bb30c0faa2f1", 00:21:56.195 "is_configured": true, 00:21:56.195 "data_offset": 256, 00:21:56.195 "data_size": 7936 00:21:56.195 } 00:21:56.195 ] 00:21:56.195 }' 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88301 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88301 ']' 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88301 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:56.195 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.195 
07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88301 00:21:56.195 killing process with pid 88301 00:21:56.196 Received shutdown signal, test time was about 60.000000 seconds 00:21:56.196 00:21:56.196 Latency(us) 00:21:56.196 [2024-11-20T07:18:38.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.196 [2024-11-20T07:18:38.461Z] =================================================================================================================== 00:21:56.196 [2024-11-20T07:18:38.461Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.196 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.196 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.196 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88301' 00:21:56.196 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88301 00:21:56.196 [2024-11-20 07:18:38.384245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:56.196 07:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88301 00:21:56.196 [2024-11-20 07:18:38.384562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.196 [2024-11-20 07:18:38.384704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.196 [2024-11-20 07:18:38.384728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:56.768 [2024-11-20 07:18:38.775570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.167 07:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:21:58.167 00:21:58.167 real 0m20.532s 00:21:58.167 user 0m26.770s 00:21:58.167 sys 0m2.779s 00:21:58.167 ************************************ 00:21:58.167 END TEST raid_rebuild_test_sb_md_separate 00:21:58.167 ************************************ 00:21:58.167 07:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.167 07:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:58.167 07:18:40 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:58.167 07:18:40 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:58.167 07:18:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:58.167 07:18:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.167 07:18:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:58.167 ************************************ 00:21:58.167 START TEST raid_state_function_test_sb_md_interleaved 00:21:58.167 ************************************ 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:58.167 07:18:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88993 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88993' 00:21:58.167 Process raid pid: 88993 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88993 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88993 ']' 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.167 07:18:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.167 [2024-11-20 07:18:40.225450] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:58.167 [2024-11-20 07:18:40.225680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.167 [2024-11-20 07:18:40.408418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.427 [2024-11-20 07:18:40.545796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.687 [2024-11-20 07:18:40.793486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.687 [2024-11-20 07:18:40.793629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.946 [2024-11-20 07:18:41.148100] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:58.946 [2024-11-20 07:18:41.148232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:58.946 [2024-11-20 07:18:41.148271] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:58.946 [2024-11-20 07:18:41.148300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:58.946 07:18:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.946 07:18:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.946 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.946 "name": "Existed_Raid", 00:21:58.946 "uuid": "872dbc5f-4fe8-4eaf-8019-381bfb044d81", 00:21:58.946 "strip_size_kb": 0, 00:21:58.946 "state": "configuring", 00:21:58.946 "raid_level": "raid1", 00:21:58.946 "superblock": true, 00:21:58.946 "num_base_bdevs": 2, 00:21:58.946 "num_base_bdevs_discovered": 0, 00:21:58.946 "num_base_bdevs_operational": 2, 00:21:58.946 "base_bdevs_list": [ 00:21:58.946 { 00:21:58.946 "name": "BaseBdev1", 00:21:58.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.946 "is_configured": false, 00:21:58.946 "data_offset": 0, 00:21:58.946 "data_size": 0 00:21:58.946 }, 00:21:58.946 { 00:21:58.946 "name": "BaseBdev2", 00:21:58.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.947 "is_configured": false, 00:21:58.947 "data_offset": 0, 00:21:58.947 "data_size": 0 00:21:58.947 } 00:21:58.947 ] 00:21:58.947 }' 00:21:58.947 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.947 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.515 [2024-11-20 07:18:41.623362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:59.515 [2024-11-20 07:18:41.623405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.515 [2024-11-20 07:18:41.635367] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:59.515 [2024-11-20 07:18:41.635492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:59.515 [2024-11-20 07:18:41.635508] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.515 [2024-11-20 07:18:41.635522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.515 [2024-11-20 07:18:41.688909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.515 BaseBdev1 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:59.515 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.516 [ 00:21:59.516 { 00:21:59.516 "name": "BaseBdev1", 00:21:59.516 "aliases": [ 00:21:59.516 "6a3d327e-1134-4a60-97ed-2e946fa83143" 00:21:59.516 ], 00:21:59.516 "product_name": "Malloc disk", 00:21:59.516 "block_size": 4128, 00:21:59.516 "num_blocks": 8192, 00:21:59.516 "uuid": "6a3d327e-1134-4a60-97ed-2e946fa83143", 00:21:59.516 "md_size": 32, 00:21:59.516 
"md_interleave": true, 00:21:59.516 "dif_type": 0, 00:21:59.516 "assigned_rate_limits": { 00:21:59.516 "rw_ios_per_sec": 0, 00:21:59.516 "rw_mbytes_per_sec": 0, 00:21:59.516 "r_mbytes_per_sec": 0, 00:21:59.516 "w_mbytes_per_sec": 0 00:21:59.516 }, 00:21:59.516 "claimed": true, 00:21:59.516 "claim_type": "exclusive_write", 00:21:59.516 "zoned": false, 00:21:59.516 "supported_io_types": { 00:21:59.516 "read": true, 00:21:59.516 "write": true, 00:21:59.516 "unmap": true, 00:21:59.516 "flush": true, 00:21:59.516 "reset": true, 00:21:59.516 "nvme_admin": false, 00:21:59.516 "nvme_io": false, 00:21:59.516 "nvme_io_md": false, 00:21:59.516 "write_zeroes": true, 00:21:59.516 "zcopy": true, 00:21:59.516 "get_zone_info": false, 00:21:59.516 "zone_management": false, 00:21:59.516 "zone_append": false, 00:21:59.516 "compare": false, 00:21:59.516 "compare_and_write": false, 00:21:59.516 "abort": true, 00:21:59.516 "seek_hole": false, 00:21:59.516 "seek_data": false, 00:21:59.516 "copy": true, 00:21:59.516 "nvme_iov_md": false 00:21:59.516 }, 00:21:59.516 "memory_domains": [ 00:21:59.516 { 00:21:59.516 "dma_device_id": "system", 00:21:59.516 "dma_device_type": 1 00:21:59.516 }, 00:21:59.516 { 00:21:59.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.516 "dma_device_type": 2 00:21:59.516 } 00:21:59.516 ], 00:21:59.516 "driver_specific": {} 00:21:59.516 } 00:21:59.516 ] 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.516 07:18:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.516 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.775 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.775 "name": "Existed_Raid", 00:21:59.775 "uuid": "cfbce0b6-0419-49d5-85b0-f5d6b414da9a", 00:21:59.775 "strip_size_kb": 0, 00:21:59.775 "state": "configuring", 00:21:59.775 "raid_level": "raid1", 
00:21:59.775 "superblock": true, 00:21:59.775 "num_base_bdevs": 2, 00:21:59.775 "num_base_bdevs_discovered": 1, 00:21:59.775 "num_base_bdevs_operational": 2, 00:21:59.775 "base_bdevs_list": [ 00:21:59.775 { 00:21:59.775 "name": "BaseBdev1", 00:21:59.775 "uuid": "6a3d327e-1134-4a60-97ed-2e946fa83143", 00:21:59.775 "is_configured": true, 00:21:59.775 "data_offset": 256, 00:21:59.775 "data_size": 7936 00:21:59.775 }, 00:21:59.775 { 00:21:59.775 "name": "BaseBdev2", 00:21:59.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.775 "is_configured": false, 00:21:59.775 "data_offset": 0, 00:21:59.775 "data_size": 0 00:21:59.775 } 00:21:59.775 ] 00:21:59.775 }' 00:21:59.775 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.775 07:18:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.033 [2024-11-20 07:18:42.188259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:00.033 [2024-11-20 07:18:42.188396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.033 [2024-11-20 07:18:42.200327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.033 [2024-11-20 07:18:42.202461] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:00.033 [2024-11-20 07:18:42.202503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.033 
07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.033 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.034 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.034 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.034 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.034 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.034 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.034 "name": "Existed_Raid", 00:22:00.034 "uuid": "f872b8f7-a885-4f10-967a-99043580f301", 00:22:00.034 "strip_size_kb": 0, 00:22:00.034 "state": "configuring", 00:22:00.034 "raid_level": "raid1", 00:22:00.034 "superblock": true, 00:22:00.034 "num_base_bdevs": 2, 00:22:00.034 "num_base_bdevs_discovered": 1, 00:22:00.034 "num_base_bdevs_operational": 2, 00:22:00.034 "base_bdevs_list": [ 00:22:00.034 { 00:22:00.034 "name": "BaseBdev1", 00:22:00.034 "uuid": "6a3d327e-1134-4a60-97ed-2e946fa83143", 00:22:00.034 "is_configured": true, 00:22:00.034 "data_offset": 256, 00:22:00.034 "data_size": 7936 00:22:00.034 }, 00:22:00.034 { 00:22:00.034 "name": "BaseBdev2", 00:22:00.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.034 "is_configured": false, 00:22:00.034 "data_offset": 0, 00:22:00.034 "data_size": 0 00:22:00.034 } 00:22:00.034 ] 00:22:00.034 }' 00:22:00.034 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:22:00.034 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.604 [2024-11-20 07:18:42.723213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.604 [2024-11-20 07:18:42.723595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:00.604 [2024-11-20 07:18:42.723652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:00.604 [2024-11-20 07:18:42.723808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:00.604 [2024-11-20 07:18:42.723933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:00.604 [2024-11-20 07:18:42.723976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:00.604 [2024-11-20 07:18:42.724101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.604 BaseBdev2 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.604 [ 00:22:00.604 { 00:22:00.604 "name": "BaseBdev2", 00:22:00.604 "aliases": [ 00:22:00.604 "61595787-37e0-4a1d-b6eb-5a0d10cb31c0" 00:22:00.604 ], 00:22:00.604 "product_name": "Malloc disk", 00:22:00.604 "block_size": 4128, 00:22:00.604 "num_blocks": 8192, 00:22:00.604 "uuid": "61595787-37e0-4a1d-b6eb-5a0d10cb31c0", 00:22:00.604 "md_size": 32, 00:22:00.604 "md_interleave": true, 00:22:00.604 "dif_type": 0, 00:22:00.604 "assigned_rate_limits": { 00:22:00.604 "rw_ios_per_sec": 0, 00:22:00.604 "rw_mbytes_per_sec": 0, 00:22:00.604 "r_mbytes_per_sec": 0, 00:22:00.604 "w_mbytes_per_sec": 0 00:22:00.604 }, 00:22:00.604 "claimed": true, 00:22:00.604 "claim_type": "exclusive_write", 
00:22:00.604 "zoned": false, 00:22:00.604 "supported_io_types": { 00:22:00.604 "read": true, 00:22:00.604 "write": true, 00:22:00.604 "unmap": true, 00:22:00.604 "flush": true, 00:22:00.604 "reset": true, 00:22:00.604 "nvme_admin": false, 00:22:00.604 "nvme_io": false, 00:22:00.604 "nvme_io_md": false, 00:22:00.604 "write_zeroes": true, 00:22:00.604 "zcopy": true, 00:22:00.604 "get_zone_info": false, 00:22:00.604 "zone_management": false, 00:22:00.604 "zone_append": false, 00:22:00.604 "compare": false, 00:22:00.604 "compare_and_write": false, 00:22:00.604 "abort": true, 00:22:00.604 "seek_hole": false, 00:22:00.604 "seek_data": false, 00:22:00.604 "copy": true, 00:22:00.604 "nvme_iov_md": false 00:22:00.604 }, 00:22:00.604 "memory_domains": [ 00:22:00.604 { 00:22:00.604 "dma_device_id": "system", 00:22:00.604 "dma_device_type": 1 00:22:00.604 }, 00:22:00.604 { 00:22:00.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.604 "dma_device_type": 2 00:22:00.604 } 00:22:00.604 ], 00:22:00.604 "driver_specific": {} 00:22:00.604 } 00:22:00.604 ] 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:00.604 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.605 
07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.605 "name": "Existed_Raid", 00:22:00.605 "uuid": "f872b8f7-a885-4f10-967a-99043580f301", 00:22:00.605 "strip_size_kb": 0, 00:22:00.605 "state": "online", 00:22:00.605 "raid_level": "raid1", 00:22:00.605 "superblock": true, 00:22:00.605 "num_base_bdevs": 2, 00:22:00.605 "num_base_bdevs_discovered": 2, 00:22:00.605 
"num_base_bdevs_operational": 2, 00:22:00.605 "base_bdevs_list": [ 00:22:00.605 { 00:22:00.605 "name": "BaseBdev1", 00:22:00.605 "uuid": "6a3d327e-1134-4a60-97ed-2e946fa83143", 00:22:00.605 "is_configured": true, 00:22:00.605 "data_offset": 256, 00:22:00.605 "data_size": 7936 00:22:00.605 }, 00:22:00.605 { 00:22:00.605 "name": "BaseBdev2", 00:22:00.605 "uuid": "61595787-37e0-4a1d-b6eb-5a0d10cb31c0", 00:22:00.605 "is_configured": true, 00:22:00.605 "data_offset": 256, 00:22:00.605 "data_size": 7936 00:22:00.605 } 00:22:00.605 ] 00:22:00.605 }' 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.605 07:18:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.210 07:18:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.210 [2024-11-20 07:18:43.250760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:01.210 "name": "Existed_Raid", 00:22:01.210 "aliases": [ 00:22:01.210 "f872b8f7-a885-4f10-967a-99043580f301" 00:22:01.210 ], 00:22:01.210 "product_name": "Raid Volume", 00:22:01.210 "block_size": 4128, 00:22:01.210 "num_blocks": 7936, 00:22:01.210 "uuid": "f872b8f7-a885-4f10-967a-99043580f301", 00:22:01.210 "md_size": 32, 00:22:01.210 "md_interleave": true, 00:22:01.210 "dif_type": 0, 00:22:01.210 "assigned_rate_limits": { 00:22:01.210 "rw_ios_per_sec": 0, 00:22:01.210 "rw_mbytes_per_sec": 0, 00:22:01.210 "r_mbytes_per_sec": 0, 00:22:01.210 "w_mbytes_per_sec": 0 00:22:01.210 }, 00:22:01.210 "claimed": false, 00:22:01.210 "zoned": false, 00:22:01.210 "supported_io_types": { 00:22:01.210 "read": true, 00:22:01.210 "write": true, 00:22:01.210 "unmap": false, 00:22:01.210 "flush": false, 00:22:01.210 "reset": true, 00:22:01.210 "nvme_admin": false, 00:22:01.210 "nvme_io": false, 00:22:01.210 "nvme_io_md": false, 00:22:01.210 "write_zeroes": true, 00:22:01.210 "zcopy": false, 00:22:01.210 "get_zone_info": false, 00:22:01.210 "zone_management": false, 00:22:01.210 "zone_append": false, 00:22:01.210 "compare": false, 00:22:01.210 "compare_and_write": false, 00:22:01.210 "abort": false, 00:22:01.210 "seek_hole": false, 00:22:01.210 "seek_data": false, 00:22:01.210 "copy": false, 00:22:01.210 "nvme_iov_md": false 00:22:01.210 }, 00:22:01.210 "memory_domains": [ 00:22:01.210 { 00:22:01.210 "dma_device_id": "system", 00:22:01.210 "dma_device_type": 1 00:22:01.210 }, 00:22:01.210 { 00:22:01.210 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:01.210 "dma_device_type": 2 00:22:01.210 }, 00:22:01.210 { 00:22:01.210 "dma_device_id": "system", 00:22:01.210 "dma_device_type": 1 00:22:01.210 }, 00:22:01.210 { 00:22:01.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.210 "dma_device_type": 2 00:22:01.210 } 00:22:01.210 ], 00:22:01.210 "driver_specific": { 00:22:01.210 "raid": { 00:22:01.210 "uuid": "f872b8f7-a885-4f10-967a-99043580f301", 00:22:01.210 "strip_size_kb": 0, 00:22:01.210 "state": "online", 00:22:01.210 "raid_level": "raid1", 00:22:01.210 "superblock": true, 00:22:01.210 "num_base_bdevs": 2, 00:22:01.210 "num_base_bdevs_discovered": 2, 00:22:01.210 "num_base_bdevs_operational": 2, 00:22:01.210 "base_bdevs_list": [ 00:22:01.210 { 00:22:01.210 "name": "BaseBdev1", 00:22:01.210 "uuid": "6a3d327e-1134-4a60-97ed-2e946fa83143", 00:22:01.210 "is_configured": true, 00:22:01.210 "data_offset": 256, 00:22:01.210 "data_size": 7936 00:22:01.210 }, 00:22:01.210 { 00:22:01.210 "name": "BaseBdev2", 00:22:01.210 "uuid": "61595787-37e0-4a1d-b6eb-5a0d10cb31c0", 00:22:01.210 "is_configured": true, 00:22:01.210 "data_offset": 256, 00:22:01.210 "data_size": 7936 00:22:01.210 } 00:22:01.210 ] 00:22:01.210 } 00:22:01.210 } 00:22:01.210 }' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:01.210 BaseBdev2' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:01.210 
07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:01.210 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.211 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.211 [2024-11-20 07:18:43.470098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.470 07:18:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.470 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.471 "name": "Existed_Raid", 00:22:01.471 "uuid": "f872b8f7-a885-4f10-967a-99043580f301", 00:22:01.471 "strip_size_kb": 0, 00:22:01.471 "state": "online", 00:22:01.471 "raid_level": "raid1", 00:22:01.471 "superblock": true, 00:22:01.471 "num_base_bdevs": 2, 00:22:01.471 "num_base_bdevs_discovered": 1, 00:22:01.471 "num_base_bdevs_operational": 1, 00:22:01.471 "base_bdevs_list": [ 00:22:01.471 { 00:22:01.471 "name": null, 00:22:01.471 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:01.471 "is_configured": false, 00:22:01.471 "data_offset": 0, 00:22:01.471 "data_size": 7936 00:22:01.471 }, 00:22:01.471 { 00:22:01.471 "name": "BaseBdev2", 00:22:01.471 "uuid": "61595787-37e0-4a1d-b6eb-5a0d10cb31c0", 00:22:01.471 "is_configured": true, 00:22:01.471 "data_offset": 256, 00:22:01.471 "data_size": 7936 00:22:01.471 } 00:22:01.471 ] 00:22:01.471 }' 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.471 07:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:02.039 07:18:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.039 [2024-11-20 07:18:44.132988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:02.039 [2024-11-20 07:18:44.133113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.039 [2024-11-20 07:18:44.248593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.039 [2024-11-20 07:18:44.248655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.039 [2024-11-20 07:18:44.248670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:02.039 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:02.040 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88993 00:22:02.040 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88993 ']' 00:22:02.040 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88993 00:22:02.040 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:02.300 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.300 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88993 00:22:02.300 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.300 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.300 killing process with pid 88993 00:22:02.300 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88993' 00:22:02.300 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88993 00:22:02.300 [2024-11-20 07:18:44.333760] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:02.300 07:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88993 00:22:02.300 [2024-11-20 07:18:44.354017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:03.680 
07:18:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:22:03.680 00:22:03.680 real 0m5.509s 00:22:03.680 user 0m7.917s 00:22:03.680 sys 0m0.931s 00:22:03.680 ************************************ 00:22:03.680 END TEST raid_state_function_test_sb_md_interleaved 00:22:03.680 ************************************ 00:22:03.680 07:18:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.680 07:18:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.680 07:18:45 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:03.680 07:18:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:03.680 07:18:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.680 07:18:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:03.680 ************************************ 00:22:03.680 START TEST raid_superblock_test_md_interleaved 00:22:03.680 ************************************ 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89251 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89251 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89251 ']' 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.680 07:18:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.680 [2024-11-20 07:18:45.799573] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:03.680 [2024-11-20 07:18:45.799706] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89251 ] 00:22:03.940 [2024-11-20 07:18:45.964166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.940 [2024-11-20 07:18:46.100670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.199 [2024-11-20 07:18:46.335768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:04.200 [2024-11-20 07:18:46.335817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.768 malloc1 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.768 [2024-11-20 07:18:46.802244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:04.768 [2024-11-20 07:18:46.802360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.768 [2024-11-20 07:18:46.802420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:04.768 [2024-11-20 07:18:46.802463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.768 
[2024-11-20 07:18:46.804375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.768 [2024-11-20 07:18:46.804453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:04.768 pt1 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.768 malloc2 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.768 [2024-11-20 07:18:46.867623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:04.768 [2024-11-20 07:18:46.867701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.768 [2024-11-20 07:18:46.867728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:04.768 [2024-11-20 07:18:46.867739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.768 [2024-11-20 07:18:46.869904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.768 [2024-11-20 07:18:46.869947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:04.768 pt2 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.768 [2024-11-20 07:18:46.879642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:04.768 [2024-11-20 07:18:46.881782] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:04.768 [2024-11-20 07:18:46.882031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:04.768 [2024-11-20 07:18:46.882048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:04.768 [2024-11-20 07:18:46.882162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:04.768 [2024-11-20 07:18:46.882250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:04.768 [2024-11-20 07:18:46.882263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:04.768 [2024-11-20 07:18:46.882380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:04.768 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.769 
07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.769 "name": "raid_bdev1", 00:22:04.769 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:04.769 "strip_size_kb": 0, 00:22:04.769 "state": "online", 00:22:04.769 "raid_level": "raid1", 00:22:04.769 "superblock": true, 00:22:04.769 "num_base_bdevs": 2, 00:22:04.769 "num_base_bdevs_discovered": 2, 00:22:04.769 "num_base_bdevs_operational": 2, 00:22:04.769 "base_bdevs_list": [ 00:22:04.769 { 00:22:04.769 "name": "pt1", 00:22:04.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:04.769 "is_configured": true, 00:22:04.769 "data_offset": 256, 00:22:04.769 "data_size": 7936 00:22:04.769 }, 00:22:04.769 { 00:22:04.769 "name": "pt2", 00:22:04.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:04.769 "is_configured": true, 00:22:04.769 "data_offset": 256, 00:22:04.769 "data_size": 7936 00:22:04.769 } 00:22:04.769 ] 00:22:04.769 }' 00:22:04.769 07:18:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.769 07:18:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.338 [2024-11-20 07:18:47.351165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.338 "name": "raid_bdev1", 00:22:05.338 "aliases": [ 00:22:05.338 "d306faf8-41e8-4bc9-8642-663feddfd6d5" 00:22:05.338 ], 00:22:05.338 "product_name": "Raid Volume", 00:22:05.338 "block_size": 4128, 00:22:05.338 "num_blocks": 7936, 00:22:05.338 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:05.338 "md_size": 32, 
00:22:05.338 "md_interleave": true, 00:22:05.338 "dif_type": 0, 00:22:05.338 "assigned_rate_limits": { 00:22:05.338 "rw_ios_per_sec": 0, 00:22:05.338 "rw_mbytes_per_sec": 0, 00:22:05.338 "r_mbytes_per_sec": 0, 00:22:05.338 "w_mbytes_per_sec": 0 00:22:05.338 }, 00:22:05.338 "claimed": false, 00:22:05.338 "zoned": false, 00:22:05.338 "supported_io_types": { 00:22:05.338 "read": true, 00:22:05.338 "write": true, 00:22:05.338 "unmap": false, 00:22:05.338 "flush": false, 00:22:05.338 "reset": true, 00:22:05.338 "nvme_admin": false, 00:22:05.338 "nvme_io": false, 00:22:05.338 "nvme_io_md": false, 00:22:05.338 "write_zeroes": true, 00:22:05.338 "zcopy": false, 00:22:05.338 "get_zone_info": false, 00:22:05.338 "zone_management": false, 00:22:05.338 "zone_append": false, 00:22:05.338 "compare": false, 00:22:05.338 "compare_and_write": false, 00:22:05.338 "abort": false, 00:22:05.338 "seek_hole": false, 00:22:05.338 "seek_data": false, 00:22:05.338 "copy": false, 00:22:05.338 "nvme_iov_md": false 00:22:05.338 }, 00:22:05.338 "memory_domains": [ 00:22:05.338 { 00:22:05.338 "dma_device_id": "system", 00:22:05.338 "dma_device_type": 1 00:22:05.338 }, 00:22:05.338 { 00:22:05.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.338 "dma_device_type": 2 00:22:05.338 }, 00:22:05.338 { 00:22:05.338 "dma_device_id": "system", 00:22:05.338 "dma_device_type": 1 00:22:05.338 }, 00:22:05.338 { 00:22:05.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.338 "dma_device_type": 2 00:22:05.338 } 00:22:05.338 ], 00:22:05.338 "driver_specific": { 00:22:05.338 "raid": { 00:22:05.338 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:05.338 "strip_size_kb": 0, 00:22:05.338 "state": "online", 00:22:05.338 "raid_level": "raid1", 00:22:05.338 "superblock": true, 00:22:05.338 "num_base_bdevs": 2, 00:22:05.338 "num_base_bdevs_discovered": 2, 00:22:05.338 "num_base_bdevs_operational": 2, 00:22:05.338 "base_bdevs_list": [ 00:22:05.338 { 00:22:05.338 "name": "pt1", 00:22:05.338 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:05.338 "is_configured": true, 00:22:05.338 "data_offset": 256, 00:22:05.338 "data_size": 7936 00:22:05.338 }, 00:22:05.338 { 00:22:05.338 "name": "pt2", 00:22:05.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:05.338 "is_configured": true, 00:22:05.338 "data_offset": 256, 00:22:05.338 "data_size": 7936 00:22:05.338 } 00:22:05.338 ] 00:22:05.338 } 00:22:05.338 } 00:22:05.338 }' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:05.338 pt2' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:05.338 07:18:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.338 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.338 [2024-11-20 07:18:47.582799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- 
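The trace above shows `verify_raid_bdev_properties` extracting the same `[.block_size, .md_size, .md_interleave, .dif_type]` tuple from the raid bdev and from each base bdev (`pt1`, `pt2`) via `jq`, and requiring the joined strings to match (`4128 32 true 0`). A minimal Python sketch of that comparison, using the values visible in the log (this is an illustrative reimplementation, not SPDK code; the `props` helper is hypothetical):

```python
# Sketch of the property check performed by verify_raid_bdev_properties:
# the raid bdev and every configured base bdev must report identical
# block_size / md_size / md_interleave / dif_type. Values taken from the
# bdev_get_bdevs output in the log above.
raid_bdev = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}
base_bdevs = {
    "pt1": {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0},
    "pt2": {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0},
}

def props(bdev):
    # Mirrors: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # (str(True).lower() reproduces jq's "true" rendering of booleans)
    return " ".join(str(bdev[k]).lower()
                    for k in ("block_size", "md_size", "md_interleave", "dif_type"))

cmp_raid_bdev = props(raid_bdev)
assert cmp_raid_bdev == "4128 32 true 0"
for name, bdev in base_bdevs.items():
    assert props(bdev) == cmp_raid_bdev, f"{name} properties diverge from raid bdev"
```

The `md_interleave: true` / `md_size: 32` pair is what distinguishes this `raid_superblock_test_md_interleaved` case: metadata is interleaved into the data blocks, giving the 4128-byte block length (4096 data + 32 metadata) checked above.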
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d306faf8-41e8-4bc9-8642-663feddfd6d5 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z d306faf8-41e8-4bc9-8642-663feddfd6d5 ']' 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.599 [2024-11-20 07:18:47.614394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.599 [2024-11-20 07:18:47.614427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.599 [2024-11-20 07:18:47.614543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.599 [2024-11-20 07:18:47.614611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.599 [2024-11-20 07:18:47.614625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.599 07:18:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.599 07:18:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.599 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.599 [2024-11-20 07:18:47.750220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:05.600 [2024-11-20 07:18:47.752473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:05.600 [2024-11-20 07:18:47.752631] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:22:05.600 [2024-11-20 07:18:47.752753] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:05.600 [2024-11-20 07:18:47.752826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.600 [2024-11-20 07:18:47.752867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:05.600 request: 00:22:05.600 { 00:22:05.600 "name": "raid_bdev1", 00:22:05.600 "raid_level": "raid1", 00:22:05.600 "base_bdevs": [ 00:22:05.600 "malloc1", 00:22:05.600 "malloc2" 00:22:05.600 ], 00:22:05.600 "superblock": false, 00:22:05.600 "method": "bdev_raid_create", 00:22:05.600 "req_id": 1 00:22:05.600 } 00:22:05.600 Got JSON-RPC error response 00:22:05.600 response: 00:22:05.600 { 00:22:05.600 "code": -17, 00:22:05.600 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:05.600 } 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.600 07:18:47 
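The `NOT rpc_cmd bdev_raid_create` step above is a negative test: both `malloc1` and `malloc2` still carry a superblock from a different raid bdev, so the create RPC must fail with `-17` (`File exists`), and the `NOT` wrapper asserts the non-zero exit (`es=1`). A small sketch of the error-response check, reproducing the JSON-RPC error printed in the log (illustrative only, not SPDK code):

```python
import errno
import json

# JSON-RPC error body copied from the log above: bdev_raid_create is
# rejected because a conflicting superblock was found on the base bdevs.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 is -EEXIST: the RPC refuses to build a new raid bdev over base
# bdevs whose superblocks name a different raid bdev, which is exactly
# the failure the NOT wrapper expects.
assert response["code"] == -errno.EEXIST
assert "File exists" in response["message"]
```

This mirrors the control flow in the trace: `es` is set to 1 on the RPC failure, and the surrounding `(( !es == 0 ))` check treats the failure as the test passing.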
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.600 [2024-11-20 07:18:47.818077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:05.600 [2024-11-20 07:18:47.818156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.600 [2024-11-20 07:18:47.818177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:05.600 [2024-11-20 07:18:47.818189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.600 [2024-11-20 07:18:47.820406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.600 [2024-11-20 07:18:47.820452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:05.600 [2024-11-20 07:18:47.820520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:05.600 [2024-11-20 07:18:47.820596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:05.600 pt1 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.600 07:18:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.600 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.860 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.860 
"name": "raid_bdev1", 00:22:05.860 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:05.860 "strip_size_kb": 0, 00:22:05.860 "state": "configuring", 00:22:05.860 "raid_level": "raid1", 00:22:05.860 "superblock": true, 00:22:05.860 "num_base_bdevs": 2, 00:22:05.860 "num_base_bdevs_discovered": 1, 00:22:05.860 "num_base_bdevs_operational": 2, 00:22:05.860 "base_bdevs_list": [ 00:22:05.860 { 00:22:05.860 "name": "pt1", 00:22:05.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:05.860 "is_configured": true, 00:22:05.860 "data_offset": 256, 00:22:05.860 "data_size": 7936 00:22:05.860 }, 00:22:05.860 { 00:22:05.860 "name": null, 00:22:05.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:05.860 "is_configured": false, 00:22:05.860 "data_offset": 256, 00:22:05.860 "data_size": 7936 00:22:05.860 } 00:22:05.860 ] 00:22:05.860 }' 00:22:05.860 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.860 07:18:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.120 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:06.120 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:06.120 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:06.120 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:06.120 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.120 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.120 [2024-11-20 07:18:48.305292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:06.120 [2024-11-20 07:18:48.305399] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.120 [2024-11-20 07:18:48.305428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:06.120 [2024-11-20 07:18:48.305441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.120 [2024-11-20 07:18:48.305643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.121 [2024-11-20 07:18:48.305659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:06.121 [2024-11-20 07:18:48.305720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:06.121 [2024-11-20 07:18:48.305748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:06.121 [2024-11-20 07:18:48.305845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:06.121 [2024-11-20 07:18:48.305858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:06.121 [2024-11-20 07:18:48.305938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:06.121 [2024-11-20 07:18:48.306019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:06.121 [2024-11-20 07:18:48.306029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:06.121 [2024-11-20 07:18:48.306101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.121 pt2 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:06.121 07:18:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.121 "name": 
"raid_bdev1", 00:22:06.121 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:06.121 "strip_size_kb": 0, 00:22:06.121 "state": "online", 00:22:06.121 "raid_level": "raid1", 00:22:06.121 "superblock": true, 00:22:06.121 "num_base_bdevs": 2, 00:22:06.121 "num_base_bdevs_discovered": 2, 00:22:06.121 "num_base_bdevs_operational": 2, 00:22:06.121 "base_bdevs_list": [ 00:22:06.121 { 00:22:06.121 "name": "pt1", 00:22:06.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:06.121 "is_configured": true, 00:22:06.121 "data_offset": 256, 00:22:06.121 "data_size": 7936 00:22:06.121 }, 00:22:06.121 { 00:22:06.121 "name": "pt2", 00:22:06.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:06.121 "is_configured": true, 00:22:06.121 "data_offset": 256, 00:22:06.121 "data_size": 7936 00:22:06.121 } 00:22:06.121 ] 00:22:06.121 }' 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.121 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.691 [2024-11-20 07:18:48.768744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.691 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:06.691 "name": "raid_bdev1", 00:22:06.691 "aliases": [ 00:22:06.692 "d306faf8-41e8-4bc9-8642-663feddfd6d5" 00:22:06.692 ], 00:22:06.692 "product_name": "Raid Volume", 00:22:06.692 "block_size": 4128, 00:22:06.692 "num_blocks": 7936, 00:22:06.692 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:06.692 "md_size": 32, 00:22:06.692 "md_interleave": true, 00:22:06.692 "dif_type": 0, 00:22:06.692 "assigned_rate_limits": { 00:22:06.692 "rw_ios_per_sec": 0, 00:22:06.692 "rw_mbytes_per_sec": 0, 00:22:06.692 "r_mbytes_per_sec": 0, 00:22:06.692 "w_mbytes_per_sec": 0 00:22:06.692 }, 00:22:06.692 "claimed": false, 00:22:06.692 "zoned": false, 00:22:06.692 "supported_io_types": { 00:22:06.692 "read": true, 00:22:06.692 "write": true, 00:22:06.692 "unmap": false, 00:22:06.692 "flush": false, 00:22:06.692 "reset": true, 00:22:06.692 "nvme_admin": false, 00:22:06.692 "nvme_io": false, 00:22:06.692 "nvme_io_md": false, 00:22:06.692 "write_zeroes": true, 00:22:06.692 "zcopy": false, 00:22:06.692 "get_zone_info": false, 00:22:06.692 "zone_management": false, 00:22:06.692 "zone_append": false, 00:22:06.692 "compare": false, 00:22:06.692 "compare_and_write": false, 00:22:06.692 "abort": false, 00:22:06.692 "seek_hole": false, 00:22:06.692 "seek_data": false, 00:22:06.692 "copy": false, 00:22:06.692 "nvme_iov_md": false 00:22:06.692 }, 
00:22:06.692 "memory_domains": [ 00:22:06.692 { 00:22:06.692 "dma_device_id": "system", 00:22:06.692 "dma_device_type": 1 00:22:06.692 }, 00:22:06.692 { 00:22:06.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.692 "dma_device_type": 2 00:22:06.692 }, 00:22:06.692 { 00:22:06.692 "dma_device_id": "system", 00:22:06.692 "dma_device_type": 1 00:22:06.692 }, 00:22:06.692 { 00:22:06.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.692 "dma_device_type": 2 00:22:06.692 } 00:22:06.692 ], 00:22:06.692 "driver_specific": { 00:22:06.692 "raid": { 00:22:06.692 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:06.692 "strip_size_kb": 0, 00:22:06.692 "state": "online", 00:22:06.692 "raid_level": "raid1", 00:22:06.692 "superblock": true, 00:22:06.692 "num_base_bdevs": 2, 00:22:06.692 "num_base_bdevs_discovered": 2, 00:22:06.692 "num_base_bdevs_operational": 2, 00:22:06.692 "base_bdevs_list": [ 00:22:06.692 { 00:22:06.692 "name": "pt1", 00:22:06.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:06.692 "is_configured": true, 00:22:06.692 "data_offset": 256, 00:22:06.692 "data_size": 7936 00:22:06.692 }, 00:22:06.692 { 00:22:06.692 "name": "pt2", 00:22:06.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:06.692 "is_configured": true, 00:22:06.692 "data_offset": 256, 00:22:06.692 "data_size": 7936 00:22:06.692 } 00:22:06.692 ] 00:22:06.692 } 00:22:06.692 } 00:22:06.692 }' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:06.692 pt2' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.692 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.952 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.952 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:22:06.952 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:06.952 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:06.952 07:18:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.952 [2024-11-20 07:18:49.008415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' d306faf8-41e8-4bc9-8642-663feddfd6d5 '!=' d306faf8-41e8-4bc9-8642-663feddfd6d5 ']' 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.952 [2024-11-20 07:18:49.056048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:06.952 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:22:06.953 "name": "raid_bdev1", 00:22:06.953 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:06.953 "strip_size_kb": 0, 00:22:06.953 "state": "online", 00:22:06.953 "raid_level": "raid1", 00:22:06.953 "superblock": true, 00:22:06.953 "num_base_bdevs": 2, 00:22:06.953 "num_base_bdevs_discovered": 1, 00:22:06.953 "num_base_bdevs_operational": 1, 00:22:06.953 "base_bdevs_list": [ 00:22:06.953 { 00:22:06.953 "name": null, 00:22:06.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.953 "is_configured": false, 00:22:06.953 "data_offset": 0, 00:22:06.953 "data_size": 7936 00:22:06.953 }, 00:22:06.953 { 00:22:06.953 "name": "pt2", 00:22:06.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:06.953 "is_configured": true, 00:22:06.953 "data_offset": 256, 00:22:06.953 "data_size": 7936 00:22:06.953 } 00:22:06.953 ] 00:22:06.953 }' 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.953 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.212 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:07.212 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.212 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.212 [2024-11-20 07:18:49.467326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:07.212 [2024-11-20 07:18:49.467432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.212 [2024-11-20 07:18:49.467548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.212 [2024-11-20 07:18:49.467624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:07.212 [2024-11-20 
07:18:49.467714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:07.212 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.472 [2024-11-20 07:18:49.527230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:07.472 [2024-11-20 07:18:49.527303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.472 [2024-11-20 07:18:49.527324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:07.472 [2024-11-20 07:18:49.527350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.472 [2024-11-20 07:18:49.529639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.472 [2024-11-20 07:18:49.529682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:07.472 [2024-11-20 07:18:49.529745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:07.472 [2024-11-20 07:18:49.529799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:07.472 [2024-11-20 07:18:49.529870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:07.472 [2024-11-20 07:18:49.529883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:22:07.472 [2024-11-20 07:18:49.529981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:07.472 [2024-11-20 07:18:49.530051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:07.472 [2024-11-20 07:18:49.530065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:07.472 [2024-11-20 07:18:49.530140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.472 pt2 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.472 "name": "raid_bdev1", 00:22:07.472 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:07.472 "strip_size_kb": 0, 00:22:07.472 "state": "online", 00:22:07.472 "raid_level": "raid1", 00:22:07.472 "superblock": true, 00:22:07.472 "num_base_bdevs": 2, 00:22:07.472 "num_base_bdevs_discovered": 1, 00:22:07.472 "num_base_bdevs_operational": 1, 00:22:07.472 "base_bdevs_list": [ 00:22:07.472 { 00:22:07.472 "name": null, 00:22:07.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.472 "is_configured": false, 00:22:07.472 "data_offset": 256, 00:22:07.472 "data_size": 7936 00:22:07.472 }, 00:22:07.472 { 00:22:07.472 "name": "pt2", 00:22:07.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:07.472 "is_configured": true, 00:22:07.472 "data_offset": 256, 00:22:07.472 "data_size": 7936 00:22:07.472 } 00:22:07.472 ] 00:22:07.472 }' 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.472 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.732 [2024-11-20 07:18:49.966454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:07.732 [2024-11-20 07:18:49.966543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.732 [2024-11-20 07:18:49.966655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.732 [2024-11-20 07:18:49.966755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:07.732 [2024-11-20 07:18:49.966820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.732 07:18:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.992 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:07.992 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:07.992 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:07.992 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:07.992 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.992 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.992 [2024-11-20 07:18:50.026433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:07.992 [2024-11-20 07:18:50.026548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.992 [2024-11-20 07:18:50.026603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:07.992 [2024-11-20 07:18:50.026635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.992 [2024-11-20 07:18:50.028719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.992 [2024-11-20 07:18:50.028813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:07.992 [2024-11-20 07:18:50.028974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:07.992 [2024-11-20 07:18:50.029061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:07.992 [2024-11-20 07:18:50.029217] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:07.992 [2024-11-20 07:18:50.029276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:07.992 [2024-11-20 07:18:50.029365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:07.992 [2024-11-20 07:18:50.029490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:07.992 [2024-11-20 07:18:50.029608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:07.992 [2024-11-20 07:18:50.029650] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:07.992 [2024-11-20 07:18:50.029759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:07.992 [2024-11-20 07:18:50.029865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:07.992 [2024-11-20 07:18:50.029912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:07.993 [2024-11-20 07:18:50.030046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.993 pt1 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.993 "name": "raid_bdev1", 00:22:07.993 "uuid": "d306faf8-41e8-4bc9-8642-663feddfd6d5", 00:22:07.993 "strip_size_kb": 0, 00:22:07.993 "state": "online", 00:22:07.993 "raid_level": "raid1", 00:22:07.993 "superblock": true, 00:22:07.993 "num_base_bdevs": 2, 00:22:07.993 "num_base_bdevs_discovered": 1, 00:22:07.993 "num_base_bdevs_operational": 1, 00:22:07.993 "base_bdevs_list": [ 00:22:07.993 { 00:22:07.993 "name": null, 00:22:07.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.993 "is_configured": false, 00:22:07.993 "data_offset": 256, 00:22:07.993 "data_size": 7936 00:22:07.993 }, 00:22:07.993 { 00:22:07.993 "name": "pt2", 00:22:07.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:07.993 "is_configured": true, 00:22:07.993 "data_offset": 256, 00:22:07.993 "data_size": 7936 00:22:07.993 } 00:22:07.993 ] 00:22:07.993 }' 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.993 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.252 07:18:50 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:08.252 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.252 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.252 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:08.252 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.252 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:08.252 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:08.253 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:08.253 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.253 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.253 [2024-11-20 07:18:50.513825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' d306faf8-41e8-4bc9-8642-663feddfd6d5 '!=' d306faf8-41e8-4bc9-8642-663feddfd6d5 ']' 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89251 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89251 ']' 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89251 00:22:08.512 07:18:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89251 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.512 killing process with pid 89251 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89251' 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89251 00:22:08.512 [2024-11-20 07:18:50.599489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:08.512 [2024-11-20 07:18:50.599608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.512 07:18:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89251 00:22:08.512 [2024-11-20 07:18:50.599666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:08.512 [2024-11-20 07:18:50.599682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:08.771 [2024-11-20 07:18:50.816988] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:10.170 07:18:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:10.170 00:22:10.170 real 0m6.284s 00:22:10.170 user 0m9.523s 00:22:10.170 sys 0m1.076s 00:22:10.170 07:18:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:22:10.170 07:18:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.170 ************************************ 00:22:10.170 END TEST raid_superblock_test_md_interleaved 00:22:10.170 ************************************ 00:22:10.170 07:18:52 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:10.170 07:18:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:10.170 07:18:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.170 07:18:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.170 ************************************ 00:22:10.170 START TEST raid_rebuild_test_sb_md_interleaved 00:22:10.170 ************************************ 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89575 00:22:10.170 07:18:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89575 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89575 ']' 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.170 07:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.170 [2024-11-20 07:18:52.162443] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:10.170 [2024-11-20 07:18:52.162737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89575 ] 00:22:10.170 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:10.170 Zero copy mechanism will not be used. 
00:22:10.170 [2024-11-20 07:18:52.328985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.430 [2024-11-20 07:18:52.454716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.430 [2024-11-20 07:18:52.680134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:10.430 [2024-11-20 07:18:52.680228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 BaseBdev1_malloc 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 [2024-11-20 07:18:53.107662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:10.997 [2024-11-20 07:18:53.107818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.997 
[2024-11-20 07:18:53.107874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:10.997 [2024-11-20 07:18:53.107937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.997 [2024-11-20 07:18:53.110196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.997 [2024-11-20 07:18:53.110279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:10.997 BaseBdev1 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 BaseBdev2_malloc 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 [2024-11-20 07:18:53.167399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:10.997 [2024-11-20 07:18:53.167469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.997 [2024-11-20 07:18:53.167493] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:10.997 [2024-11-20 07:18:53.167506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.997 [2024-11-20 07:18:53.169520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.997 [2024-11-20 07:18:53.169630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:10.997 BaseBdev2 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 spare_malloc 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 spare_delay 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 [2024-11-20 07:18:53.248309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:10.997 [2024-11-20 07:18:53.248421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.997 [2024-11-20 07:18:53.248453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:10.997 [2024-11-20 07:18:53.248466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.997 [2024-11-20 07:18:53.250602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.997 [2024-11-20 07:18:53.250650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:10.997 spare 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.997 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.997 [2024-11-20 07:18:53.260349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.258 [2024-11-20 07:18:53.262636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.258 [2024-11-20 07:18:53.262873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:11.258 [2024-11-20 07:18:53.262889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:11.258 [2024-11-20 07:18:53.262998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:22:11.258 [2024-11-20 07:18:53.263095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:11.258 [2024-11-20 07:18:53.263104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:11.258 [2024-11-20 07:18:53.263197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.258 "name": "raid_bdev1", 00:22:11.258 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:11.258 "strip_size_kb": 0, 00:22:11.258 "state": "online", 00:22:11.258 "raid_level": "raid1", 00:22:11.258 "superblock": true, 00:22:11.258 "num_base_bdevs": 2, 00:22:11.258 "num_base_bdevs_discovered": 2, 00:22:11.258 "num_base_bdevs_operational": 2, 00:22:11.258 "base_bdevs_list": [ 00:22:11.258 { 00:22:11.258 "name": "BaseBdev1", 00:22:11.258 "uuid": "2c225410-b1d2-521a-9b54-abce062e1503", 00:22:11.258 "is_configured": true, 00:22:11.258 "data_offset": 256, 00:22:11.258 "data_size": 7936 00:22:11.258 }, 00:22:11.258 { 00:22:11.258 "name": "BaseBdev2", 00:22:11.258 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:11.258 "is_configured": true, 00:22:11.258 "data_offset": 256, 00:22:11.258 "data_size": 7936 00:22:11.258 } 00:22:11.258 ] 00:22:11.258 }' 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.258 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.518 
07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.518 [2024-11-20 07:18:53.703862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:11.518 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.778 [2024-11-20 07:18:53.799375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.778 "name": "raid_bdev1", 00:22:11.778 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:11.778 "strip_size_kb": 0, 00:22:11.778 "state": "online", 00:22:11.778 "raid_level": "raid1", 00:22:11.778 "superblock": true, 00:22:11.778 "num_base_bdevs": 2, 00:22:11.778 "num_base_bdevs_discovered": 1, 00:22:11.778 "num_base_bdevs_operational": 1, 00:22:11.778 "base_bdevs_list": [ 00:22:11.778 { 00:22:11.778 "name": null, 00:22:11.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.778 "is_configured": false, 00:22:11.778 "data_offset": 0, 00:22:11.778 "data_size": 7936 00:22:11.778 }, 00:22:11.778 { 00:22:11.778 "name": "BaseBdev2", 00:22:11.778 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:11.778 "is_configured": true, 00:22:11.778 "data_offset": 256, 00:22:11.778 "data_size": 7936 00:22:11.778 } 00:22:11.778 ] 00:22:11.778 }' 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.778 07:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.038 07:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:12.038 07:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.038 07:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.038 [2024-11-20 07:18:54.294546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:12.297 [2024-11-20 07:18:54.314653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:12.297 07:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.297 07:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:12.297 
[2024-11-20 07:18:54.316874] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.237 "name": "raid_bdev1", 00:22:13.237 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:13.237 "strip_size_kb": 0, 00:22:13.237 "state": "online", 00:22:13.237 "raid_level": "raid1", 00:22:13.237 "superblock": true, 00:22:13.237 "num_base_bdevs": 2, 00:22:13.237 "num_base_bdevs_discovered": 2, 00:22:13.237 "num_base_bdevs_operational": 2, 00:22:13.237 "process": { 00:22:13.237 "type": "rebuild", 00:22:13.237 "target": "spare", 00:22:13.237 "progress": { 00:22:13.237 
"blocks": 2560, 00:22:13.237 "percent": 32 00:22:13.237 } 00:22:13.237 }, 00:22:13.237 "base_bdevs_list": [ 00:22:13.237 { 00:22:13.237 "name": "spare", 00:22:13.237 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:13.237 "is_configured": true, 00:22:13.237 "data_offset": 256, 00:22:13.237 "data_size": 7936 00:22:13.237 }, 00:22:13.237 { 00:22:13.237 "name": "BaseBdev2", 00:22:13.237 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:13.237 "is_configured": true, 00:22:13.237 "data_offset": 256, 00:22:13.237 "data_size": 7936 00:22:13.237 } 00:22:13.237 ] 00:22:13.237 }' 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.237 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.237 [2024-11-20 07:18:55.471757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:13.497 [2024-11-20 07:18:55.522817] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:13.497 [2024-11-20 07:18:55.522920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.497 [2024-11-20 07:18:55.522937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:13.497 [2024-11-20 07:18:55.522950] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.497 "name": "raid_bdev1", 00:22:13.497 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:13.497 "strip_size_kb": 0, 00:22:13.497 "state": "online", 00:22:13.497 "raid_level": "raid1", 00:22:13.497 "superblock": true, 00:22:13.497 "num_base_bdevs": 2, 00:22:13.497 "num_base_bdevs_discovered": 1, 00:22:13.497 "num_base_bdevs_operational": 1, 00:22:13.497 "base_bdevs_list": [ 00:22:13.497 { 00:22:13.497 "name": null, 00:22:13.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.497 "is_configured": false, 00:22:13.497 "data_offset": 0, 00:22:13.497 "data_size": 7936 00:22:13.497 }, 00:22:13.497 { 00:22:13.497 "name": "BaseBdev2", 00:22:13.497 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:13.497 "is_configured": true, 00:22:13.497 "data_offset": 256, 00:22:13.497 "data_size": 7936 00:22:13.497 } 00:22:13.497 ] 00:22:13.497 }' 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.497 07:18:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.761 07:18:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.761 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.021 "name": "raid_bdev1", 00:22:14.021 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:14.021 "strip_size_kb": 0, 00:22:14.021 "state": "online", 00:22:14.021 "raid_level": "raid1", 00:22:14.021 "superblock": true, 00:22:14.021 "num_base_bdevs": 2, 00:22:14.021 "num_base_bdevs_discovered": 1, 00:22:14.021 "num_base_bdevs_operational": 1, 00:22:14.021 "base_bdevs_list": [ 00:22:14.021 { 00:22:14.021 "name": null, 00:22:14.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.021 "is_configured": false, 00:22:14.021 "data_offset": 0, 00:22:14.021 "data_size": 7936 00:22:14.021 }, 00:22:14.021 { 00:22:14.021 "name": "BaseBdev2", 00:22:14.021 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:14.021 "is_configured": true, 00:22:14.021 "data_offset": 256, 00:22:14.021 "data_size": 7936 00:22:14.021 } 00:22:14.021 ] 00:22:14.021 }' 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.021 07:18:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.021 [2024-11-20 07:18:56.152845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:14.021 [2024-11-20 07:18:56.170417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.021 07:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:14.021 [2024-11-20 07:18:56.172538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.958 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:15.217 "name": "raid_bdev1", 00:22:15.217 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:15.217 "strip_size_kb": 0, 00:22:15.217 "state": "online", 00:22:15.217 "raid_level": "raid1", 00:22:15.217 "superblock": true, 00:22:15.217 "num_base_bdevs": 2, 00:22:15.217 "num_base_bdevs_discovered": 2, 00:22:15.217 "num_base_bdevs_operational": 2, 00:22:15.217 "process": { 00:22:15.217 "type": "rebuild", 00:22:15.217 "target": "spare", 00:22:15.217 "progress": { 00:22:15.217 "blocks": 2560, 00:22:15.217 "percent": 32 00:22:15.217 } 00:22:15.217 }, 00:22:15.217 "base_bdevs_list": [ 00:22:15.217 { 00:22:15.217 "name": "spare", 00:22:15.217 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:15.217 "is_configured": true, 00:22:15.217 "data_offset": 256, 00:22:15.217 "data_size": 7936 00:22:15.217 }, 00:22:15.217 { 00:22:15.217 "name": "BaseBdev2", 00:22:15.217 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:15.217 "is_configured": true, 00:22:15.217 "data_offset": 256, 00:22:15.217 "data_size": 7936 00:22:15.217 } 00:22:15.217 ] 00:22:15.217 }' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:15.217 07:18:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:15.217 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=773 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.217 07:18:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:15.217 "name": "raid_bdev1", 00:22:15.217 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:15.217 "strip_size_kb": 0, 00:22:15.217 "state": "online", 00:22:15.217 "raid_level": "raid1", 00:22:15.217 "superblock": true, 00:22:15.217 "num_base_bdevs": 2, 00:22:15.217 "num_base_bdevs_discovered": 2, 00:22:15.217 "num_base_bdevs_operational": 2, 00:22:15.217 "process": { 00:22:15.217 "type": "rebuild", 00:22:15.217 "target": "spare", 00:22:15.217 "progress": { 00:22:15.217 "blocks": 2816, 00:22:15.217 "percent": 35 00:22:15.217 } 00:22:15.217 }, 00:22:15.217 "base_bdevs_list": [ 00:22:15.217 { 00:22:15.217 "name": "spare", 00:22:15.217 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:15.217 "is_configured": true, 00:22:15.217 "data_offset": 256, 00:22:15.217 "data_size": 7936 00:22:15.217 }, 00:22:15.217 { 00:22:15.217 "name": "BaseBdev2", 00:22:15.217 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:15.217 "is_configured": true, 00:22:15.217 "data_offset": 256, 00:22:15.217 "data_size": 7936 00:22:15.217 } 00:22:15.217 ] 00:22:15.217 }' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.217 07:18:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.596 "name": "raid_bdev1", 00:22:16.596 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:16.596 "strip_size_kb": 0, 00:22:16.596 "state": "online", 00:22:16.596 "raid_level": "raid1", 00:22:16.596 "superblock": true, 00:22:16.596 "num_base_bdevs": 2, 00:22:16.596 "num_base_bdevs_discovered": 2, 00:22:16.596 
"num_base_bdevs_operational": 2, 00:22:16.596 "process": { 00:22:16.596 "type": "rebuild", 00:22:16.596 "target": "spare", 00:22:16.596 "progress": { 00:22:16.596 "blocks": 5632, 00:22:16.596 "percent": 70 00:22:16.596 } 00:22:16.596 }, 00:22:16.596 "base_bdevs_list": [ 00:22:16.596 { 00:22:16.596 "name": "spare", 00:22:16.596 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:16.596 "is_configured": true, 00:22:16.596 "data_offset": 256, 00:22:16.596 "data_size": 7936 00:22:16.596 }, 00:22:16.596 { 00:22:16.596 "name": "BaseBdev2", 00:22:16.596 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:16.596 "is_configured": true, 00:22:16.596 "data_offset": 256, 00:22:16.596 "data_size": 7936 00:22:16.596 } 00:22:16.596 ] 00:22:16.596 }' 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.596 07:18:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:17.166 [2024-11-20 07:18:59.287592] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:17.166 [2024-11-20 07:18:59.287798] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:17.166 [2024-11-20 07:18:59.287995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.426 "name": "raid_bdev1", 00:22:17.426 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:17.426 "strip_size_kb": 0, 00:22:17.426 "state": "online", 00:22:17.426 "raid_level": "raid1", 00:22:17.426 "superblock": true, 00:22:17.426 "num_base_bdevs": 2, 00:22:17.426 "num_base_bdevs_discovered": 2, 00:22:17.426 "num_base_bdevs_operational": 2, 00:22:17.426 "base_bdevs_list": [ 00:22:17.426 { 00:22:17.426 "name": "spare", 00:22:17.426 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:17.426 "is_configured": true, 00:22:17.426 "data_offset": 256, 00:22:17.426 "data_size": 7936 00:22:17.426 }, 00:22:17.426 { 00:22:17.426 "name": "BaseBdev2", 00:22:17.426 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:17.426 
"is_configured": true, 00:22:17.426 "data_offset": 256, 00:22:17.426 "data_size": 7936 00:22:17.426 } 00:22:17.426 ] 00:22:17.426 }' 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:17.426 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.685 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.686 "name": "raid_bdev1", 00:22:17.686 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:17.686 "strip_size_kb": 0, 00:22:17.686 "state": "online", 00:22:17.686 "raid_level": "raid1", 00:22:17.686 "superblock": true, 00:22:17.686 "num_base_bdevs": 2, 00:22:17.686 "num_base_bdevs_discovered": 2, 00:22:17.686 "num_base_bdevs_operational": 2, 00:22:17.686 "base_bdevs_list": [ 00:22:17.686 { 00:22:17.686 "name": "spare", 00:22:17.686 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:17.686 "is_configured": true, 00:22:17.686 "data_offset": 256, 00:22:17.686 "data_size": 7936 00:22:17.686 }, 00:22:17.686 { 00:22:17.686 "name": "BaseBdev2", 00:22:17.686 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:17.686 "is_configured": true, 00:22:17.686 "data_offset": 256, 00:22:17.686 "data_size": 7936 00:22:17.686 } 00:22:17.686 ] 00:22:17.686 }' 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.686 "name": "raid_bdev1", 00:22:17.686 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:17.686 "strip_size_kb": 0, 00:22:17.686 "state": "online", 00:22:17.686 "raid_level": "raid1", 00:22:17.686 "superblock": true, 00:22:17.686 "num_base_bdevs": 2, 00:22:17.686 "num_base_bdevs_discovered": 2, 00:22:17.686 "num_base_bdevs_operational": 2, 00:22:17.686 "base_bdevs_list": [ 00:22:17.686 { 00:22:17.686 "name": "spare", 00:22:17.686 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:17.686 
"is_configured": true, 00:22:17.686 "data_offset": 256, 00:22:17.686 "data_size": 7936 00:22:17.686 }, 00:22:17.686 { 00:22:17.686 "name": "BaseBdev2", 00:22:17.686 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:17.686 "is_configured": true, 00:22:17.686 "data_offset": 256, 00:22:17.686 "data_size": 7936 00:22:17.686 } 00:22:17.686 ] 00:22:17.686 }' 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.686 07:18:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.256 [2024-11-20 07:19:00.300718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.256 [2024-11-20 07:19:00.300846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.256 [2024-11-20 07:19:00.300991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.256 [2024-11-20 07:19:00.301118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.256 [2024-11-20 07:19:00.301180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.256 
07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.256 [2024-11-20 07:19:00.360612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:18.256 [2024-11-20 07:19:00.360688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.256 [2024-11-20 07:19:00.360716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:18.256 [2024-11-20 07:19:00.360727] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.256 [2024-11-20 07:19:00.363018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.256 [2024-11-20 07:19:00.363120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:18.256 [2024-11-20 07:19:00.363199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:18.256 [2024-11-20 07:19:00.363283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:18.256 [2024-11-20 07:19:00.363428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:18.256 spare 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:18.256 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.257 [2024-11-20 07:19:00.463354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:18.257 [2024-11-20 07:19:00.463496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:18.257 [2024-11-20 07:19:00.463682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:18.257 [2024-11-20 07:19:00.463810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:18.257 [2024-11-20 07:19:00.463823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:18.257 [2024-11-20 07:19:00.463940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.257 07:19:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.257 07:19:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.257 "name": "raid_bdev1", 00:22:18.257 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:18.257 "strip_size_kb": 0, 00:22:18.257 "state": "online", 00:22:18.257 "raid_level": "raid1", 00:22:18.257 "superblock": true, 00:22:18.257 "num_base_bdevs": 2, 00:22:18.257 "num_base_bdevs_discovered": 2, 00:22:18.257 "num_base_bdevs_operational": 2, 00:22:18.257 "base_bdevs_list": [ 00:22:18.257 { 00:22:18.257 "name": "spare", 00:22:18.257 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:18.257 "is_configured": true, 00:22:18.257 "data_offset": 256, 00:22:18.257 "data_size": 7936 00:22:18.257 }, 00:22:18.257 { 00:22:18.257 "name": "BaseBdev2", 00:22:18.257 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:18.257 "is_configured": true, 00:22:18.257 "data_offset": 256, 00:22:18.257 "data_size": 7936 00:22:18.257 } 00:22:18.257 ] 00:22:18.257 }' 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.257 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.838 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.838 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:18.838 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:18.838 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:18.838 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:18.839 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.839 07:19:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.839 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.839 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.839 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.839 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:18.839 "name": "raid_bdev1", 00:22:18.839 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:18.839 "strip_size_kb": 0, 00:22:18.839 "state": "online", 00:22:18.839 "raid_level": "raid1", 00:22:18.839 "superblock": true, 00:22:18.839 "num_base_bdevs": 2, 00:22:18.839 "num_base_bdevs_discovered": 2, 00:22:18.839 "num_base_bdevs_operational": 2, 00:22:18.839 "base_bdevs_list": [ 00:22:18.839 { 00:22:18.839 "name": "spare", 00:22:18.839 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:18.839 "is_configured": true, 00:22:18.839 "data_offset": 256, 00:22:18.839 "data_size": 7936 00:22:18.839 }, 00:22:18.839 { 00:22:18.839 "name": "BaseBdev2", 00:22:18.839 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:18.839 "is_configured": true, 00:22:18.839 "data_offset": 256, 00:22:18.839 "data_size": 7936 00:22:18.839 } 00:22:18.839 ] 00:22:18.839 }' 00:22:18.839 07:19:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:18.839 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:18.839 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:18.839 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:18.839 07:19:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:18.839 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.839 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.839 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.098 [2024-11-20 07:19:01.131446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.098 07:19:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.098 "name": "raid_bdev1", 00:22:19.098 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:19.098 "strip_size_kb": 0, 00:22:19.098 "state": "online", 00:22:19.098 "raid_level": "raid1", 00:22:19.098 "superblock": true, 00:22:19.098 "num_base_bdevs": 2, 00:22:19.098 "num_base_bdevs_discovered": 1, 00:22:19.098 "num_base_bdevs_operational": 1, 00:22:19.098 "base_bdevs_list": [ 00:22:19.098 { 00:22:19.098 "name": null, 00:22:19.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.098 "is_configured": false, 00:22:19.098 "data_offset": 0, 00:22:19.098 "data_size": 7936 00:22:19.098 }, 00:22:19.098 { 00:22:19.098 "name": "BaseBdev2", 00:22:19.098 
"uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:19.098 "is_configured": true, 00:22:19.098 "data_offset": 256, 00:22:19.098 "data_size": 7936 00:22:19.098 } 00:22:19.098 ] 00:22:19.098 }' 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.098 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.358 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:19.358 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.358 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.358 [2024-11-20 07:19:01.570718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.358 [2024-11-20 07:19:01.570994] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:19.358 [2024-11-20 07:19:01.571064] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:19.358 [2024-11-20 07:19:01.571154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.358 [2024-11-20 07:19:01.589565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:19.358 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.358 07:19:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:19.358 [2024-11-20 07:19:01.591855] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:22:20.738 "name": "raid_bdev1", 00:22:20.738 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:20.738 "strip_size_kb": 0, 00:22:20.738 "state": "online", 00:22:20.738 "raid_level": "raid1", 00:22:20.738 "superblock": true, 00:22:20.738 "num_base_bdevs": 2, 00:22:20.738 "num_base_bdevs_discovered": 2, 00:22:20.738 "num_base_bdevs_operational": 2, 00:22:20.738 "process": { 00:22:20.738 "type": "rebuild", 00:22:20.738 "target": "spare", 00:22:20.738 "progress": { 00:22:20.738 "blocks": 2560, 00:22:20.738 "percent": 32 00:22:20.738 } 00:22:20.738 }, 00:22:20.738 "base_bdevs_list": [ 00:22:20.738 { 00:22:20.738 "name": "spare", 00:22:20.738 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:20.738 "is_configured": true, 00:22:20.738 "data_offset": 256, 00:22:20.738 "data_size": 7936 00:22:20.738 }, 00:22:20.738 { 00:22:20.738 "name": "BaseBdev2", 00:22:20.738 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:20.738 "is_configured": true, 00:22:20.738 "data_offset": 256, 00:22:20.738 "data_size": 7936 00:22:20.738 } 00:22:20.738 ] 00:22:20.738 }' 00:22:20.738 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:20.739 [2024-11-20 07:19:02.742997] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.739 [2024-11-20 07:19:02.797899] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:20.739 [2024-11-20 07:19:02.797996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.739 [2024-11-20 07:19:02.798014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.739 [2024-11-20 07:19:02.798025] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.739 07:19:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.739 "name": "raid_bdev1", 00:22:20.739 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:20.739 "strip_size_kb": 0, 00:22:20.739 "state": "online", 00:22:20.739 "raid_level": "raid1", 00:22:20.739 "superblock": true, 00:22:20.739 "num_base_bdevs": 2, 00:22:20.739 "num_base_bdevs_discovered": 1, 00:22:20.739 "num_base_bdevs_operational": 1, 00:22:20.739 "base_bdevs_list": [ 00:22:20.739 { 00:22:20.739 "name": null, 00:22:20.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.739 "is_configured": false, 00:22:20.739 "data_offset": 0, 00:22:20.739 "data_size": 7936 00:22:20.739 }, 00:22:20.739 { 00:22:20.739 "name": "BaseBdev2", 00:22:20.739 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:20.739 "is_configured": true, 00:22:20.739 "data_offset": 256, 00:22:20.739 "data_size": 7936 00:22:20.739 } 00:22:20.739 ] 00:22:20.739 }' 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.739 07:19:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.307 07:19:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:21.307 07:19:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.307 07:19:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.307 [2024-11-20 07:19:03.299203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:21.307 [2024-11-20 07:19:03.299280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.307 [2024-11-20 07:19:03.299309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:21.307 [2024-11-20 07:19:03.299322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.307 [2024-11-20 07:19:03.299550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.307 [2024-11-20 07:19:03.299570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:21.307 [2024-11-20 07:19:03.299635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:21.307 [2024-11-20 07:19:03.299655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:21.307 [2024-11-20 07:19:03.299665] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:21.307 [2024-11-20 07:19:03.299695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.307 [2024-11-20 07:19:03.318242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:21.307 spare 00:22:21.307 07:19:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.307 07:19:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:21.307 [2024-11-20 07:19:03.320273] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.245 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:22.245 "name": "raid_bdev1", 00:22:22.245 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:22.245 "strip_size_kb": 0, 00:22:22.245 "state": "online", 00:22:22.245 "raid_level": "raid1", 00:22:22.245 "superblock": true, 00:22:22.245 "num_base_bdevs": 2, 00:22:22.245 "num_base_bdevs_discovered": 2, 00:22:22.245 "num_base_bdevs_operational": 2, 00:22:22.245 "process": { 00:22:22.245 "type": "rebuild", 00:22:22.246 "target": "spare", 00:22:22.246 "progress": { 00:22:22.246 "blocks": 2560, 00:22:22.246 "percent": 32 00:22:22.246 } 00:22:22.246 }, 00:22:22.246 "base_bdevs_list": [ 00:22:22.246 { 00:22:22.246 "name": "spare", 00:22:22.246 "uuid": "f5a81fc6-edec-5ab9-b6e7-5caede02a463", 00:22:22.246 "is_configured": true, 00:22:22.246 "data_offset": 256, 00:22:22.246 "data_size": 7936 00:22:22.246 }, 00:22:22.246 { 00:22:22.246 "name": "BaseBdev2", 00:22:22.246 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:22.246 "is_configured": true, 00:22:22.246 "data_offset": 256, 00:22:22.246 "data_size": 7936 00:22:22.246 } 00:22:22.246 ] 00:22:22.246 }' 00:22:22.246 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:22.246 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:22.246 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:22.246 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:22.246 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:22.246 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.246 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.246 [2024-11-20 
07:19:04.483773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:22.505 [2024-11-20 07:19:04.526145] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:22.505 [2024-11-20 07:19:04.526243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.505 [2024-11-20 07:19:04.526265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:22.505 [2024-11-20 07:19:04.526274] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.505 07:19:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.505 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.505 "name": "raid_bdev1", 00:22:22.505 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:22.505 "strip_size_kb": 0, 00:22:22.505 "state": "online", 00:22:22.505 "raid_level": "raid1", 00:22:22.505 "superblock": true, 00:22:22.505 "num_base_bdevs": 2, 00:22:22.505 "num_base_bdevs_discovered": 1, 00:22:22.505 "num_base_bdevs_operational": 1, 00:22:22.505 "base_bdevs_list": [ 00:22:22.505 { 00:22:22.505 "name": null, 00:22:22.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.505 "is_configured": false, 00:22:22.505 "data_offset": 0, 00:22:22.505 "data_size": 7936 00:22:22.505 }, 00:22:22.505 { 00:22:22.505 "name": "BaseBdev2", 00:22:22.505 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:22.506 "is_configured": true, 00:22:22.506 "data_offset": 256, 00:22:22.506 "data_size": 7936 00:22:22.506 } 00:22:22.506 ] 00:22:22.506 }' 00:22:22.506 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.506 07:19:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.072 07:19:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.072 "name": "raid_bdev1", 00:22:23.072 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:23.072 "strip_size_kb": 0, 00:22:23.072 "state": "online", 00:22:23.072 "raid_level": "raid1", 00:22:23.072 "superblock": true, 00:22:23.072 "num_base_bdevs": 2, 00:22:23.072 "num_base_bdevs_discovered": 1, 00:22:23.072 "num_base_bdevs_operational": 1, 00:22:23.072 "base_bdevs_list": [ 00:22:23.072 { 00:22:23.072 "name": null, 00:22:23.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.072 "is_configured": false, 00:22:23.072 "data_offset": 0, 00:22:23.072 "data_size": 7936 00:22:23.072 }, 00:22:23.072 { 00:22:23.072 "name": "BaseBdev2", 00:22:23.072 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:23.072 "is_configured": true, 00:22:23.072 "data_offset": 256, 
00:22:23.072 "data_size": 7936 00:22:23.072 } 00:22:23.072 ] 00:22:23.072 }' 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 [2024-11-20 07:19:05.197721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:23.072 [2024-11-20 07:19:05.197897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.072 [2024-11-20 07:19:05.197937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:23.072 [2024-11-20 07:19:05.197950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.072 [2024-11-20 07:19:05.198158] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.072 [2024-11-20 07:19:05.198174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:23.072 [2024-11-20 07:19:05.198247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:23.072 [2024-11-20 07:19:05.198263] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:23.072 [2024-11-20 07:19:05.198276] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:23.072 [2024-11-20 07:19:05.198289] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:23.072 BaseBdev1 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.072 07:19:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.008 07:19:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.008 "name": "raid_bdev1", 00:22:24.008 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:24.008 "strip_size_kb": 0, 00:22:24.008 "state": "online", 00:22:24.008 "raid_level": "raid1", 00:22:24.008 "superblock": true, 00:22:24.008 "num_base_bdevs": 2, 00:22:24.008 "num_base_bdevs_discovered": 1, 00:22:24.008 "num_base_bdevs_operational": 1, 00:22:24.008 "base_bdevs_list": [ 00:22:24.008 { 00:22:24.008 "name": null, 00:22:24.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.008 "is_configured": false, 00:22:24.008 "data_offset": 0, 00:22:24.008 "data_size": 7936 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "name": "BaseBdev2", 00:22:24.008 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:24.008 "is_configured": true, 00:22:24.008 "data_offset": 256, 00:22:24.008 "data_size": 7936 00:22:24.008 } 00:22:24.008 ] 00:22:24.008 }' 00:22:24.008 07:19:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.008 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.577 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.577 "name": "raid_bdev1", 00:22:24.577 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:24.577 "strip_size_kb": 0, 00:22:24.577 "state": "online", 00:22:24.577 "raid_level": "raid1", 00:22:24.577 "superblock": true, 00:22:24.577 "num_base_bdevs": 2, 00:22:24.578 "num_base_bdevs_discovered": 1, 00:22:24.578 "num_base_bdevs_operational": 1, 00:22:24.578 "base_bdevs_list": [ 00:22:24.578 { 00:22:24.578 "name": 
null, 00:22:24.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.578 "is_configured": false, 00:22:24.578 "data_offset": 0, 00:22:24.578 "data_size": 7936 00:22:24.578 }, 00:22:24.578 { 00:22:24.578 "name": "BaseBdev2", 00:22:24.578 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:24.578 "is_configured": true, 00:22:24.578 "data_offset": 256, 00:22:24.578 "data_size": 7936 00:22:24.578 } 00:22:24.578 ] 00:22:24.578 }' 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.578 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.837 [2024-11-20 07:19:06.843228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.837 [2024-11-20 07:19:06.843458] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:24.837 [2024-11-20 07:19:06.843480] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:24.837 request: 00:22:24.837 { 00:22:24.837 "base_bdev": "BaseBdev1", 00:22:24.837 "raid_bdev": "raid_bdev1", 00:22:24.837 "method": "bdev_raid_add_base_bdev", 00:22:24.837 "req_id": 1 00:22:24.837 } 00:22:24.837 Got JSON-RPC error response 00:22:24.837 response: 00:22:24.837 { 00:22:24.837 "code": -22, 00:22:24.837 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:24.837 } 00:22:24.837 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:24.837 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:24.837 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.837 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.837 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.837 07:19:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.775 "name": "raid_bdev1", 00:22:25.775 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:25.775 "strip_size_kb": 0, 
00:22:25.775 "state": "online", 00:22:25.775 "raid_level": "raid1", 00:22:25.775 "superblock": true, 00:22:25.775 "num_base_bdevs": 2, 00:22:25.775 "num_base_bdevs_discovered": 1, 00:22:25.775 "num_base_bdevs_operational": 1, 00:22:25.775 "base_bdevs_list": [ 00:22:25.775 { 00:22:25.775 "name": null, 00:22:25.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.775 "is_configured": false, 00:22:25.775 "data_offset": 0, 00:22:25.775 "data_size": 7936 00:22:25.775 }, 00:22:25.775 { 00:22:25.775 "name": "BaseBdev2", 00:22:25.775 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:25.775 "is_configured": true, 00:22:25.775 "data_offset": 256, 00:22:25.775 "data_size": 7936 00:22:25.775 } 00:22:25.775 ] 00:22:25.775 }' 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.775 07:19:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.344 07:19:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.344 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.345 "name": "raid_bdev1", 00:22:26.345 "uuid": "030cbb86-c4c7-4a2c-a4f5-0012530b60b1", 00:22:26.345 "strip_size_kb": 0, 00:22:26.345 "state": "online", 00:22:26.345 "raid_level": "raid1", 00:22:26.345 "superblock": true, 00:22:26.345 "num_base_bdevs": 2, 00:22:26.345 "num_base_bdevs_discovered": 1, 00:22:26.345 "num_base_bdevs_operational": 1, 00:22:26.345 "base_bdevs_list": [ 00:22:26.345 { 00:22:26.345 "name": null, 00:22:26.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.345 "is_configured": false, 00:22:26.345 "data_offset": 0, 00:22:26.345 "data_size": 7936 00:22:26.345 }, 00:22:26.345 { 00:22:26.345 "name": "BaseBdev2", 00:22:26.345 "uuid": "19886699-df5f-5426-b4d7-1d1641503fb6", 00:22:26.345 "is_configured": true, 00:22:26.345 "data_offset": 256, 00:22:26.345 "data_size": 7936 00:22:26.345 } 00:22:26.345 ] 00:22:26.345 }' 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89575 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89575 ']' 00:22:26.345 07:19:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89575 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89575 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89575' 00:22:26.345 killing process with pid 89575 00:22:26.345 Received shutdown signal, test time was about 60.000000 seconds 00:22:26.345 00:22:26.345 Latency(us) 00:22:26.345 [2024-11-20T07:19:08.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.345 [2024-11-20T07:19:08.610Z] =================================================================================================================== 00:22:26.345 [2024-11-20T07:19:08.610Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89575 00:22:26.345 [2024-11-20 07:19:08.533202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:26.345 [2024-11-20 07:19:08.533362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.345 [2024-11-20 07:19:08.533422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.345 [2024-11-20 07:19:08.533437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:26.345 07:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89575 00:22:26.913 [2024-11-20 07:19:08.890700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:28.313 07:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:22:28.313 ************************************ 00:22:28.313 END TEST raid_rebuild_test_sb_md_interleaved 00:22:28.313 ************************************ 00:22:28.313 00:22:28.313 real 0m18.090s 00:22:28.313 user 0m23.813s 00:22:28.313 sys 0m1.670s 00:22:28.313 07:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.313 07:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:28.313 07:19:10 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:22:28.313 07:19:10 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:22:28.313 07:19:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89575 ']' 00:22:28.313 07:19:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89575 00:22:28.313 07:19:10 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:22:28.313 00:22:28.313 real 12m35.450s 00:22:28.313 user 16m55.712s 00:22:28.313 sys 1m59.062s 00:22:28.313 07:19:10 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.313 07:19:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.313 ************************************ 00:22:28.313 END TEST bdev_raid 00:22:28.313 ************************************ 00:22:28.313 07:19:10 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:28.313 07:19:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:28.313 07:19:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.313 07:19:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.313 
************************************ 00:22:28.313 START TEST spdkcli_raid 00:22:28.313 ************************************ 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:28.313 * Looking for test storage... 00:22:28.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.313 07:19:10 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.313 --rc genhtml_branch_coverage=1 00:22:28.313 --rc genhtml_function_coverage=1 00:22:28.313 --rc genhtml_legend=1 00:22:28.313 --rc geninfo_all_blocks=1 00:22:28.313 --rc geninfo_unexecuted_blocks=1 00:22:28.313 00:22:28.313 ' 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.313 --rc genhtml_branch_coverage=1 00:22:28.313 --rc genhtml_function_coverage=1 00:22:28.313 --rc genhtml_legend=1 00:22:28.313 --rc geninfo_all_blocks=1 00:22:28.313 --rc geninfo_unexecuted_blocks=1 00:22:28.313 00:22:28.313 ' 00:22:28.313 
07:19:10 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.313 --rc genhtml_branch_coverage=1 00:22:28.313 --rc genhtml_function_coverage=1 00:22:28.313 --rc genhtml_legend=1 00:22:28.313 --rc geninfo_all_blocks=1 00:22:28.313 --rc geninfo_unexecuted_blocks=1 00:22:28.313 00:22:28.313 ' 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.313 --rc genhtml_branch_coverage=1 00:22:28.313 --rc genhtml_function_coverage=1 00:22:28.313 --rc genhtml_legend=1 00:22:28.313 --rc geninfo_all_blocks=1 00:22:28.313 --rc geninfo_unexecuted_blocks=1 00:22:28.313 00:22:28.313 ' 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:28.313 07:19:10 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:22:28.313 07:19:10 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.313 07:19:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.314 07:19:10 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:22:28.314 07:19:10 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90257 00:22:28.314 07:19:10 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90257 00:22:28.314 07:19:10 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90257 ']' 00:22:28.314 07:19:10 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.314 07:19:10 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.314 07:19:10 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:28.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.314 07:19:10 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.314 07:19:10 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.314 07:19:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 [2024-11-20 07:19:10.640088] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:22:28.574 [2024-11-20 07:19:10.640311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90257 ] 00:22:28.574 [2024-11-20 07:19:10.815397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:28.834 [2024-11-20 07:19:10.971763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.834 [2024-11-20 07:19:10.971784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.771 07:19:11 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.771 07:19:11 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:22:29.771 07:19:11 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:22:29.771 07:19:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.771 07:19:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:29.771 07:19:11 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:22:29.771 07:19:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.771 07:19:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:29.771 07:19:11 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:29.771 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:29.771 ' 00:22:31.677 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:22:31.677 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:22:31.677 07:19:13 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:22:31.677 07:19:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.677 07:19:13 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.677 07:19:13 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:22:31.677 07:19:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.677 07:19:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:31.677 07:19:13 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:22:31.677 ' 00:22:32.619 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:22:32.907 07:19:14 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:22:32.907 07:19:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.907 07:19:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.907 07:19:14 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:22:32.907 07:19:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.907 07:19:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.907 07:19:14 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:22:32.907 07:19:14 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:22:33.476 07:19:15 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:22:33.476 07:19:15 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:22:33.476 07:19:15 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:22:33.476 07:19:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.476 07:19:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.476 07:19:15 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:22:33.476 07:19:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.476 07:19:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.476 07:19:15 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:22:33.476 ' 00:22:34.414 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:22:34.673 07:19:16 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:22:34.673 07:19:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.673 07:19:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:34.673 07:19:16 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:22:34.673 07:19:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.673 07:19:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:34.673 07:19:16 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:22:34.673 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:22:34.673 ' 00:22:36.053 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:22:36.053 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:22:36.053 07:19:18 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:22:36.053 07:19:18 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.053 07:19:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:36.313 07:19:18 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90257 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90257 ']' 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90257 00:22:36.313 07:19:18 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90257 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90257' 00:22:36.313 killing process with pid 90257 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90257 00:22:36.313 07:19:18 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90257 00:22:38.898 07:19:21 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:22:38.898 07:19:21 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90257 ']' 00:22:38.898 07:19:21 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90257 00:22:38.898 07:19:21 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90257 ']' 00:22:38.898 07:19:21 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90257 00:22:38.898 Process with pid 90257 is not found 00:22:38.898 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90257) - No such process 00:22:38.898 07:19:21 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90257 is not found' 00:22:38.898 07:19:21 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:22:38.898 07:19:21 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:38.898 07:19:21 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:38.898 07:19:21 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:38.898 00:22:38.898 real 0m10.783s 00:22:38.898 user 0m22.340s 00:22:38.898 sys 
0m1.156s 00:22:38.898 07:19:21 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.898 07:19:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:38.898 ************************************ 00:22:38.898 END TEST spdkcli_raid 00:22:38.898 ************************************ 00:22:38.898 07:19:21 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:38.898 07:19:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.898 07:19:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.898 07:19:21 -- common/autotest_common.sh@10 -- # set +x 00:22:38.898 ************************************ 00:22:38.898 START TEST blockdev_raid5f 00:22:38.898 ************************************ 00:22:38.898 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:39.158 * Looking for test storage... 00:22:39.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.158 07:19:21 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:39.158 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.158 --rc genhtml_branch_coverage=1 00:22:39.158 --rc genhtml_function_coverage=1 00:22:39.158 --rc genhtml_legend=1 00:22:39.158 --rc geninfo_all_blocks=1 00:22:39.158 --rc geninfo_unexecuted_blocks=1 00:22:39.158 00:22:39.158 ' 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:39.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.158 --rc genhtml_branch_coverage=1 00:22:39.158 --rc genhtml_function_coverage=1 00:22:39.158 --rc genhtml_legend=1 00:22:39.158 --rc geninfo_all_blocks=1 00:22:39.158 --rc geninfo_unexecuted_blocks=1 00:22:39.158 00:22:39.158 ' 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:39.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.158 --rc genhtml_branch_coverage=1 00:22:39.158 --rc genhtml_function_coverage=1 00:22:39.158 --rc genhtml_legend=1 00:22:39.158 --rc geninfo_all_blocks=1 00:22:39.158 --rc geninfo_unexecuted_blocks=1 00:22:39.158 00:22:39.158 ' 00:22:39.158 07:19:21 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:39.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.158 --rc genhtml_branch_coverage=1 00:22:39.158 --rc genhtml_function_coverage=1 00:22:39.158 --rc genhtml_legend=1 00:22:39.158 --rc geninfo_all_blocks=1 00:22:39.158 --rc geninfo_unexecuted_blocks=1 00:22:39.158 00:22:39.158 ' 00:22:39.158 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:39.158 07:19:21 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:22:39.158 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:39.158 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:39.158 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90538 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:39.159 07:19:21 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90538 00:22:39.159 07:19:21 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90538 ']' 00:22:39.159 07:19:21 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.159 07:19:21 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.159 07:19:21 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.159 07:19:21 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.159 07:19:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:39.418 [2024-11-20 07:19:21.464814] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:22:39.418 [2024-11-20 07:19:21.465026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90538 ] 00:22:39.418 [2024-11-20 07:19:21.642278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.678 [2024-11-20 07:19:21.776292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:22:40.663 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:40.663 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:22:40.663 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.663 Malloc0 00:22:40.663 Malloc1 00:22:40.663 Malloc2 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.663 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.663 07:19:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.663 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:22:40.942 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.942 07:19:22 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.942 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.942 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.942 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:40.942 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:22:40.942 07:19:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.942 07:19:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.942 07:19:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.942 07:19:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:40.942 07:19:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:40.942 07:19:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f531867a-a2ac-478d-aa6d-5c1fcbc91224"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f531867a-a2ac-478d-aa6d-5c1fcbc91224",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f531867a-a2ac-478d-aa6d-5c1fcbc91224",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "cfe4718e-b7c3-49cb-bd4f-3f03d43d0429",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "be360843-922b-478e-b57d-359aa990fac5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "936e855c-6d79-4080-9cee-94bce058f328",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:40.942 07:19:23 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:40.942 07:19:23 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:22:40.942 07:19:23 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:40.942 07:19:23 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90538 00:22:40.942 07:19:23 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90538 ']' 00:22:40.942 07:19:23 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90538 00:22:40.942 07:19:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:22:40.942 07:19:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.942 
07:19:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90538 00:22:40.942 killing process with pid 90538 00:22:40.942 07:19:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.942 07:19:23 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.943 07:19:23 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90538' 00:22:40.943 07:19:23 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90538 00:22:40.943 07:19:23 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90538 00:22:44.239 07:19:26 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:44.239 07:19:26 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:44.239 07:19:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:44.239 07:19:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.239 07:19:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.239 ************************************ 00:22:44.239 START TEST bdev_hello_world 00:22:44.239 ************************************ 00:22:44.239 07:19:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:44.239 [2024-11-20 07:19:26.164327] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:22:44.239 [2024-11-20 07:19:26.164475] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90611 ] 00:22:44.239 [2024-11-20 07:19:26.341085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.239 [2024-11-20 07:19:26.464032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.808 [2024-11-20 07:19:27.042322] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:44.808 [2024-11-20 07:19:27.042391] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:44.808 [2024-11-20 07:19:27.042415] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:44.808 [2024-11-20 07:19:27.043000] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:44.808 [2024-11-20 07:19:27.043195] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:44.808 [2024-11-20 07:19:27.043214] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:44.808 [2024-11-20 07:19:27.043276] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:22:44.808 00:22:44.808 [2024-11-20 07:19:27.043300] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:46.719 00:22:46.719 real 0m2.513s 00:22:46.719 user 0m2.149s 00:22:46.719 sys 0m0.241s 00:22:46.719 07:19:28 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.719 07:19:28 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:46.719 ************************************ 00:22:46.719 END TEST bdev_hello_world 00:22:46.719 ************************************ 00:22:46.719 07:19:28 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:46.719 07:19:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.719 07:19:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.719 07:19:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:46.719 ************************************ 00:22:46.719 START TEST bdev_bounds 00:22:46.719 ************************************ 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90660 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:46.719 Process bdevio pid: 90660 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90660' 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90660 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90660 ']' 00:22:46.719 07:19:28 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.719 07:19:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:46.719 [2024-11-20 07:19:28.747392] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:46.719 [2024-11-20 07:19:28.747513] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90660 ] 00:22:46.719 [2024-11-20 07:19:28.923465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:46.979 [2024-11-20 07:19:29.051088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.979 [2024-11-20 07:19:29.051221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.979 [2024-11-20 07:19:29.051257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.546 07:19:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.546 07:19:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:47.546 07:19:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:47.546 I/O targets: 00:22:47.546 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:47.546 00:22:47.546 
00:22:47.546 CUnit - A unit testing framework for C - Version 2.1-3 00:22:47.546 http://cunit.sourceforge.net/ 00:22:47.546 00:22:47.546 00:22:47.546 Suite: bdevio tests on: raid5f 00:22:47.546 Test: blockdev write read block ...passed 00:22:47.546 Test: blockdev write zeroes read block ...passed 00:22:47.806 Test: blockdev write zeroes read no split ...passed 00:22:47.806 Test: blockdev write zeroes read split ...passed 00:22:47.806 Test: blockdev write zeroes read split partial ...passed 00:22:47.806 Test: blockdev reset ...passed 00:22:47.806 Test: blockdev write read 8 blocks ...passed 00:22:47.806 Test: blockdev write read size > 128k ...passed 00:22:47.806 Test: blockdev write read invalid size ...passed 00:22:47.806 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:47.806 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:47.806 Test: blockdev write read max offset ...passed 00:22:47.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:47.806 Test: blockdev writev readv 8 blocks ...passed 00:22:47.806 Test: blockdev writev readv 30 x 1block ...passed 00:22:47.806 Test: blockdev writev readv block ...passed 00:22:47.806 Test: blockdev writev readv size > 128k ...passed 00:22:47.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:47.806 Test: blockdev comparev and writev ...passed 00:22:47.806 Test: blockdev nvme passthru rw ...passed 00:22:47.806 Test: blockdev nvme passthru vendor specific ...passed 00:22:47.806 Test: blockdev nvme admin passthru ...passed 00:22:47.806 Test: blockdev copy ...passed 00:22:47.806 00:22:47.806 Run Summary: Type Total Ran Passed Failed Inactive 00:22:47.806 suites 1 1 n/a 0 0 00:22:47.806 tests 23 23 23 0 0 00:22:47.806 asserts 130 130 130 0 n/a 00:22:47.806 00:22:47.806 Elapsed time = 0.656 seconds 00:22:47.806 0 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90660 00:22:48.067 
07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90660 ']' 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90660 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90660 00:22:48.067 killing process with pid 90660 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90660' 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90660 00:22:48.067 07:19:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90660 00:22:49.459 07:19:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:49.459 00:22:49.459 real 0m3.060s 00:22:49.459 user 0m7.755s 00:22:49.459 sys 0m0.403s 00:22:49.459 ************************************ 00:22:49.459 END TEST bdev_bounds 00:22:49.459 ************************************ 00:22:49.459 07:19:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.459 07:19:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:49.718 07:19:31 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:49.718 07:19:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:49.718 07:19:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.718 
07:19:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:49.718 ************************************ 00:22:49.718 START TEST bdev_nbd 00:22:49.718 ************************************ 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:22:49.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90725 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90725 /var/tmp/spdk-nbd.sock 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90725 ']' 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.718 07:19:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:49.718 [2024-11-20 07:19:31.877668] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:22:49.718 [2024-11-20 07:19:31.877797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.976 [2024-11-20 07:19:32.053760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.976 [2024-11-20 07:19:32.176325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:50.907 07:19:32 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:50.907 1+0 records in 00:22:50.907 1+0 records out 00:22:50.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056283 s, 7.3 MB/s 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:22:50.907 07:19:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:50.908 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:50.908 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:50.908 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:51.165 { 00:22:51.165 "nbd_device": "/dev/nbd0", 00:22:51.165 "bdev_name": "raid5f" 00:22:51.165 } 00:22:51.165 ]' 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:51.165 { 00:22:51.165 "nbd_device": "/dev/nbd0", 00:22:51.165 "bdev_name": "raid5f" 00:22:51.165 } 00:22:51.165 ]' 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:51.165 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:51.423 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:51.681 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:51.938 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:51.938 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:51.938 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:51.939 07:19:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:51.939 /dev/nbd0 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:52.219 07:19:34 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:52.219 1+0 records in 00:22:52.219 1+0 records out 00:22:52.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330179 s, 12.4 MB/s 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:52.219 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:52.482 { 00:22:52.482 "nbd_device": "/dev/nbd0", 00:22:52.482 "bdev_name": "raid5f" 00:22:52.482 } 00:22:52.482 ]' 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:52.482 { 00:22:52.482 "nbd_device": "/dev/nbd0", 00:22:52.482 "bdev_name": "raid5f" 00:22:52.482 } 00:22:52.482 ]' 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:52.482 256+0 records in 00:22:52.482 256+0 records out 00:22:52.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134478 s, 78.0 MB/s 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:52.482 256+0 records in 00:22:52.482 256+0 records out 00:22:52.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379683 s, 27.6 MB/s 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.482 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:52.740 07:19:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:52.998 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:52.999 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:53.256 malloc_lvol_verify 00:22:53.256 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:53.514 f5f830af-0bbc-4cc6-8fc3-77b565e2e696 00:22:53.514 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:53.772 6ce1684b-e01d-4511-a147-b48b50a22925 00:22:53.772 07:19:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:54.030 /dev/nbd0 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:54.030 mke2fs 1.47.0 (5-Feb-2023) 00:22:54.030 Discarding device blocks: 0/4096 done 00:22:54.030 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:54.030 00:22:54.030 Allocating group tables: 0/1 done 00:22:54.030 Writing inode tables: 0/1 done 00:22:54.030 Creating journal (1024 blocks): done 00:22:54.030 Writing superblocks and filesystem accounting information: 0/1 done 00:22:54.030 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.030 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90725 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90725 ']' 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90725 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90725 00:22:54.289 killing process with pid 90725 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90725' 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90725 00:22:54.289 07:19:36 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90725 00:22:56.193 07:19:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:56.193 00:22:56.193 real 0m6.478s 00:22:56.193 user 0m8.969s 00:22:56.193 sys 0m1.314s 00:22:56.193 07:19:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.193 ************************************ 00:22:56.193 END TEST bdev_nbd 00:22:56.193 ************************************ 00:22:56.193 07:19:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:56.193 07:19:38 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:56.193 07:19:38 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:22:56.193 07:19:38 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:22:56.193 07:19:38 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:22:56.193 07:19:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:56.193 07:19:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.193 07:19:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:56.193 ************************************ 00:22:56.193 START TEST bdev_fio 00:22:56.193 ************************************ 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:56.193 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:56.193 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:56.194 ************************************ 00:22:56.194 START TEST bdev_fio_rw_verify 00:22:56.194 ************************************ 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:56.194 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:56.453 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:56.453 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:56.453 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:22:56.453 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:56.453 07:19:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:56.453 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:56.453 fio-3.35 00:22:56.453 Starting 1 thread 00:23:08.667 00:23:08.667 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90933: Wed Nov 20 07:19:49 2024 00:23:08.667 read: IOPS=8755, BW=34.2MiB/s (35.9MB/s)(342MiB/10001msec) 00:23:08.667 slat (nsec): min=19481, max=78559, avg=27169.31, stdev=3494.54 00:23:08.667 clat (usec): min=11, max=474, avg=181.30, stdev=65.81 00:23:08.667 lat (usec): min=35, max=510, avg=208.47, stdev=66.47 00:23:08.667 clat percentiles (usec): 00:23:08.667 | 50.000th=[ 180], 99.000th=[ 310], 99.900th=[ 379], 99.990th=[ 404], 00:23:08.667 | 99.999th=[ 474] 00:23:08.667 write: IOPS=9175, BW=35.8MiB/s (37.6MB/s)(354MiB/9878msec); 0 zone resets 00:23:08.667 slat (usec): min=9, max=157, avg=23.51, stdev= 5.42 00:23:08.667 clat (usec): min=71, max=4243, avg=417.52, stdev=71.52 00:23:08.667 lat (usec): min=91, max=4264, avg=441.03, stdev=73.02 00:23:08.667 clat percentiles (usec): 00:23:08.667 | 50.000th=[ 420], 99.000th=[ 594], 99.900th=[ 701], 99.990th=[ 1012], 00:23:08.667 | 99.999th=[ 4228] 00:23:08.667 bw ( KiB/s): min=34264, max=38696, per=99.87%, avg=36657.53, stdev=1373.60, samples=19 00:23:08.667 iops : min= 8566, max= 9674, avg=9164.37, stdev=343.39, samples=19 00:23:08.667 lat (usec) : 20=0.01%, 100=6.83%, 
250=33.60%, 500=56.18%, 750=3.35% 00:23:08.667 lat (usec) : 1000=0.02% 00:23:08.667 lat (msec) : 2=0.01%, 10=0.01% 00:23:08.667 cpu : usr=98.79%, sys=0.46%, ctx=38, majf=0, minf=7563 00:23:08.667 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:08.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:08.667 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:08.667 issued rwts: total=87559,90640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:08.667 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:08.667 00:23:08.667 Run status group 0 (all jobs): 00:23:08.667 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=342MiB (359MB), run=10001-10001msec 00:23:08.667 WRITE: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=354MiB (371MB), run=9878-9878msec 00:23:09.605 ----------------------------------------------------- 00:23:09.605 Suppressions used: 00:23:09.605 count bytes template 00:23:09.605 1 7 /usr/src/fio/parse.c 00:23:09.605 374 35904 /usr/src/fio/iolog.c 00:23:09.605 1 8 libtcmalloc_minimal.so 00:23:09.605 1 904 libcrypto.so 00:23:09.605 ----------------------------------------------------- 00:23:09.605 00:23:09.605 ************************************ 00:23:09.605 END TEST bdev_fio_rw_verify 00:23:09.605 ************************************ 00:23:09.605 00:23:09.605 real 0m13.192s 00:23:09.605 user 0m13.302s 00:23:09.605 sys 0m0.603s 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:09.605 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f531867a-a2ac-478d-aa6d-5c1fcbc91224"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"f531867a-a2ac-478d-aa6d-5c1fcbc91224",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f531867a-a2ac-478d-aa6d-5c1fcbc91224",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "cfe4718e-b7c3-49cb-bd4f-3f03d43d0429",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "be360843-922b-478e-b57d-359aa990fac5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "936e855c-6d79-4080-9cee-94bce058f328",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:09.606 /home/vagrant/spdk_repo/spdk 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:23:09.606 00:23:09.606 real 0m13.433s 00:23:09.606 user 0m13.410s 00:23:09.606 sys 0m0.711s 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.606 ************************************ 00:23:09.606 END TEST bdev_fio 00:23:09.606 ************************************ 00:23:09.606 07:19:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:09.606 07:19:51 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:09.606 07:19:51 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:09.606 07:19:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:09.606 07:19:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.606 07:19:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:09.606 ************************************ 00:23:09.606 START TEST bdev_verify 00:23:09.606 ************************************ 00:23:09.606 07:19:51 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:09.865 [2024-11-20 07:19:51.921237] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:23:09.865 [2024-11-20 07:19:51.921391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91098 ] 00:23:09.865 [2024-11-20 07:19:52.102893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:10.124 [2024-11-20 07:19:52.241970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.124 [2024-11-20 07:19:52.241995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.692 Running I/O for 5 seconds... 00:23:13.067 9819.00 IOPS, 38.36 MiB/s [2024-11-20T07:19:55.899Z] 10846.00 IOPS, 42.37 MiB/s [2024-11-20T07:19:57.278Z] 11597.33 IOPS, 45.30 MiB/s [2024-11-20T07:19:58.219Z] 11975.00 IOPS, 46.78 MiB/s [2024-11-20T07:19:58.219Z] 12067.60 IOPS, 47.14 MiB/s 00:23:15.954 Latency(us) 00:23:15.954 [2024-11-20T07:19:58.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.954 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:15.954 Verification LBA range: start 0x0 length 0x2000 00:23:15.954 raid5f : 5.02 6082.07 23.76 0.00 0.00 31557.31 271.87 29534.13 00:23:15.954 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:15.954 Verification LBA range: start 0x2000 length 0x2000 00:23:15.954 raid5f : 5.02 5985.22 23.38 0.00 0.00 32059.28 160.08 54031.43 00:23:15.954 [2024-11-20T07:19:58.219Z] =================================================================================================================== 00:23:15.954 [2024-11-20T07:19:58.219Z] Total : 12067.29 47.14 0.00 0.00 31806.39 160.08 54031.43 00:23:17.350 ************************************ 00:23:17.350 END TEST bdev_verify 00:23:17.350 ************************************ 00:23:17.350 00:23:17.350 real 0m7.640s 00:23:17.350 user 0m14.075s 00:23:17.350 sys 0m0.280s 
00:23:17.350 07:19:59 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.350 07:19:59 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:17.350 07:19:59 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:17.350 07:19:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:17.350 07:19:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.350 07:19:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:17.350 ************************************ 00:23:17.350 START TEST bdev_verify_big_io 00:23:17.350 ************************************ 00:23:17.350 07:19:59 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:17.609 [2024-11-20 07:19:59.618496] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:17.609 [2024-11-20 07:19:59.618630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91197 ] 00:23:17.609 [2024-11-20 07:19:59.798016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:17.869 [2024-11-20 07:19:59.933569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.869 [2024-11-20 07:19:59.933601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.438 Running I/O for 5 seconds... 
00:23:20.311 506.00 IOPS, 31.62 MiB/s [2024-11-20T07:20:03.954Z] 633.00 IOPS, 39.56 MiB/s [2024-11-20T07:20:04.941Z] 654.33 IOPS, 40.90 MiB/s [2024-11-20T07:20:05.889Z] 634.50 IOPS, 39.66 MiB/s [2024-11-20T07:20:05.889Z] 660.00 IOPS, 41.25 MiB/s 00:23:23.624 Latency(us) 00:23:23.624 [2024-11-20T07:20:05.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.624 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:23.624 Verification LBA range: start 0x0 length 0x200 00:23:23.624 raid5f : 5.29 336.10 21.01 0.00 0.00 9313611.48 203.91 406609.38 00:23:23.624 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:23.624 Verification LBA range: start 0x200 length 0x200 00:23:23.624 raid5f : 5.28 336.59 21.04 0.00 0.00 9262329.81 316.59 401114.66 00:23:23.624 [2024-11-20T07:20:05.889Z] =================================================================================================================== 00:23:23.624 [2024-11-20T07:20:05.889Z] Total : 672.69 42.04 0.00 0.00 9287970.65 203.91 406609.38 00:23:25.527 00:23:25.527 real 0m7.921s 00:23:25.527 user 0m14.632s 00:23:25.527 sys 0m0.271s 00:23:25.527 ************************************ 00:23:25.527 END TEST bdev_verify_big_io 00:23:25.527 ************************************ 00:23:25.527 07:20:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.527 07:20:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 07:20:07 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:25.527 07:20:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:25.527 07:20:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.527 07:20:07 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 ************************************ 00:23:25.527 START TEST bdev_write_zeroes 00:23:25.527 ************************************ 00:23:25.527 07:20:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:25.527 [2024-11-20 07:20:07.599499] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:25.528 [2024-11-20 07:20:07.599721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91295 ] 00:23:25.528 [2024-11-20 07:20:07.777945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.785 [2024-11-20 07:20:07.917932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.353 Running I/O for 1 seconds... 
00:23:27.289 22911.00 IOPS, 89.50 MiB/s 00:23:27.289 Latency(us) 00:23:27.289 [2024-11-20T07:20:09.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.289 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:27.289 raid5f : 1.01 22882.92 89.39 0.00 0.00 5574.29 1624.09 8757.21 00:23:27.289 [2024-11-20T07:20:09.554Z] =================================================================================================================== 00:23:27.289 [2024-11-20T07:20:09.554Z] Total : 22882.92 89.39 0.00 0.00 5574.29 1624.09 8757.21 00:23:29.239 00:23:29.239 real 0m3.527s 00:23:29.239 user 0m3.134s 00:23:29.239 sys 0m0.262s 00:23:29.239 ************************************ 00:23:29.239 END TEST bdev_write_zeroes 00:23:29.239 ************************************ 00:23:29.239 07:20:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.239 07:20:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:29.239 07:20:11 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:29.239 07:20:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:29.239 07:20:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.239 07:20:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.239 ************************************ 00:23:29.239 START TEST bdev_json_nonenclosed 00:23:29.239 ************************************ 00:23:29.239 07:20:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:29.239 [2024-11-20 
07:20:11.190384] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:29.239 [2024-11-20 07:20:11.190632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91354 ] 00:23:29.239 [2024-11-20 07:20:11.375319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.498 [2024-11-20 07:20:11.503558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.498 [2024-11-20 07:20:11.503662] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:29.498 [2024-11-20 07:20:11.503693] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:29.498 [2024-11-20 07:20:11.503705] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:29.757 00:23:29.757 real 0m0.681s 00:23:29.757 user 0m0.444s 00:23:29.757 sys 0m0.131s 00:23:29.757 07:20:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.757 ************************************ 00:23:29.757 END TEST bdev_json_nonenclosed 00:23:29.757 ************************************ 00:23:29.757 07:20:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:29.757 07:20:11 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:29.757 07:20:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:29.757 07:20:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.757 07:20:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.757 
************************************ 00:23:29.757 START TEST bdev_json_nonarray 00:23:29.757 ************************************ 00:23:29.757 07:20:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:29.757 [2024-11-20 07:20:11.938420] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:29.757 [2024-11-20 07:20:11.938541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91385 ] 00:23:30.016 [2024-11-20 07:20:12.113357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.016 [2024-11-20 07:20:12.240616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.016 [2024-11-20 07:20:12.240848] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:23:30.016 [2024-11-20 07:20:12.240876] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:30.016 [2024-11-20 07:20:12.240900] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:30.274 00:23:30.274 real 0m0.673s 00:23:30.274 user 0m0.440s 00:23:30.274 sys 0m0.128s 00:23:30.274 07:20:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.274 ************************************ 00:23:30.274 END TEST bdev_json_nonarray 00:23:30.274 ************************************ 00:23:30.274 07:20:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:23:30.533 07:20:12 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:23:30.533 00:23:30.533 real 0m51.461s 00:23:30.533 user 1m10.078s 00:23:30.533 sys 0m4.820s 00:23:30.533 ************************************ 00:23:30.533 END TEST blockdev_raid5f 00:23:30.533 ************************************ 00:23:30.533 07:20:12 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.533 07:20:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:30.533 07:20:12 -- spdk/autotest.sh@194 -- # uname -s 00:23:30.533 07:20:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:23:30.533 07:20:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:30.533 07:20:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:30.533 07:20:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:30.533 07:20:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.533 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.533 07:20:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:30.533 07:20:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:30.533 07:20:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:30.533 07:20:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:30.533 07:20:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:30.533 07:20:12 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:23:30.533 07:20:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:30.533 07:20:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.533 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.533 07:20:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:30.533 07:20:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:30.533 07:20:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:30.533 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:32.460 INFO: APP EXITING 00:23:32.460 INFO: killing all VMs 00:23:32.460 INFO: killing vhost app 00:23:32.460 INFO: EXIT DONE 00:23:33.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:33.027 Waiting for block devices as requested 00:23:33.027 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:33.286 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:34.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:34.223 Cleaning 00:23:34.223 Removing: /var/run/dpdk/spdk0/config 00:23:34.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:34.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:34.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:34.223 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:34.223 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:34.223 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:34.223 Removing: /dev/shm/spdk_tgt_trace.pid56985 00:23:34.223 Removing: /var/run/dpdk/spdk0 00:23:34.223 Removing: /var/run/dpdk/spdk_pid56734 00:23:34.223 Removing: /var/run/dpdk/spdk_pid56985 00:23:34.223 Removing: /var/run/dpdk/spdk_pid57215 00:23:34.223 Removing: /var/run/dpdk/spdk_pid57319 00:23:34.223 Removing: /var/run/dpdk/spdk_pid57375 00:23:34.223 Removing: /var/run/dpdk/spdk_pid57514 00:23:34.223 Removing: 
/var/run/dpdk/spdk_pid57532 00:23:34.223 Removing: /var/run/dpdk/spdk_pid57748 00:23:34.223 Removing: /var/run/dpdk/spdk_pid57866 00:23:34.223 Removing: /var/run/dpdk/spdk_pid57984 00:23:34.223 Removing: /var/run/dpdk/spdk_pid58121 00:23:34.223 Removing: /var/run/dpdk/spdk_pid58236 00:23:34.223 Removing: /var/run/dpdk/spdk_pid58281 00:23:34.223 Removing: /var/run/dpdk/spdk_pid58323 00:23:34.223 Removing: /var/run/dpdk/spdk_pid58399 00:23:34.223 Removing: /var/run/dpdk/spdk_pid58526 00:23:34.223 Removing: /var/run/dpdk/spdk_pid58993 00:23:34.223 Removing: /var/run/dpdk/spdk_pid59068 00:23:34.223 Removing: /var/run/dpdk/spdk_pid59152 00:23:34.223 Removing: /var/run/dpdk/spdk_pid59172 00:23:34.223 Removing: /var/run/dpdk/spdk_pid59343 00:23:34.223 Removing: /var/run/dpdk/spdk_pid59368 00:23:34.223 Removing: /var/run/dpdk/spdk_pid59534 00:23:34.223 Removing: /var/run/dpdk/spdk_pid59555 00:23:34.224 Removing: /var/run/dpdk/spdk_pid59625 00:23:34.224 Removing: /var/run/dpdk/spdk_pid59652 00:23:34.224 Removing: /var/run/dpdk/spdk_pid59721 00:23:34.224 Removing: /var/run/dpdk/spdk_pid59739 00:23:34.224 Removing: /var/run/dpdk/spdk_pid59951 00:23:34.224 Removing: /var/run/dpdk/spdk_pid59982 00:23:34.224 Removing: /var/run/dpdk/spdk_pid60071 00:23:34.224 Removing: /var/run/dpdk/spdk_pid61458 00:23:34.224 Removing: /var/run/dpdk/spdk_pid61670 00:23:34.224 Removing: /var/run/dpdk/spdk_pid61821 00:23:34.224 Removing: /var/run/dpdk/spdk_pid62482 00:23:34.224 Removing: /var/run/dpdk/spdk_pid62698 00:23:34.224 Removing: /var/run/dpdk/spdk_pid62839 00:23:34.224 Removing: /var/run/dpdk/spdk_pid63501 00:23:34.224 Removing: /var/run/dpdk/spdk_pid63831 00:23:34.224 Removing: /var/run/dpdk/spdk_pid63971 00:23:34.224 Removing: /var/run/dpdk/spdk_pid65362 00:23:34.224 Removing: /var/run/dpdk/spdk_pid65625 00:23:34.224 Removing: /var/run/dpdk/spdk_pid65766 00:23:34.224 Removing: /var/run/dpdk/spdk_pid67158 00:23:34.224 Removing: /var/run/dpdk/spdk_pid67422 00:23:34.224 Removing: 
/var/run/dpdk/spdk_pid67562 00:23:34.224 Removing: /var/run/dpdk/spdk_pid68953 00:23:34.224 Removing: /var/run/dpdk/spdk_pid69399 00:23:34.224 Removing: /var/run/dpdk/spdk_pid69550 00:23:34.224 Removing: /var/run/dpdk/spdk_pid71049 00:23:34.484 Removing: /var/run/dpdk/spdk_pid71308 00:23:34.484 Removing: /var/run/dpdk/spdk_pid71458 00:23:34.484 Removing: /var/run/dpdk/spdk_pid72956 00:23:34.484 Removing: /var/run/dpdk/spdk_pid73221 00:23:34.484 Removing: /var/run/dpdk/spdk_pid73372 00:23:34.484 Removing: /var/run/dpdk/spdk_pid74884 00:23:34.484 Removing: /var/run/dpdk/spdk_pid75374 00:23:34.484 Removing: /var/run/dpdk/spdk_pid75526 00:23:34.484 Removing: /var/run/dpdk/spdk_pid75670 00:23:34.484 Removing: /var/run/dpdk/spdk_pid76105 00:23:34.484 Removing: /var/run/dpdk/spdk_pid76855 00:23:34.484 Removing: /var/run/dpdk/spdk_pid77258 00:23:34.484 Removing: /var/run/dpdk/spdk_pid77979 00:23:34.484 Removing: /var/run/dpdk/spdk_pid78431 00:23:34.484 Removing: /var/run/dpdk/spdk_pid79214 00:23:34.484 Removing: /var/run/dpdk/spdk_pid79629 00:23:34.484 Removing: /var/run/dpdk/spdk_pid81605 00:23:34.484 Removing: /var/run/dpdk/spdk_pid82050 00:23:34.484 Removing: /var/run/dpdk/spdk_pid82509 00:23:34.484 Removing: /var/run/dpdk/spdk_pid84618 00:23:34.484 Removing: /var/run/dpdk/spdk_pid85109 00:23:34.484 Removing: /var/run/dpdk/spdk_pid85641 00:23:34.484 Removing: /var/run/dpdk/spdk_pid86700 00:23:34.484 Removing: /var/run/dpdk/spdk_pid87028 00:23:34.484 Removing: /var/run/dpdk/spdk_pid87974 00:23:34.484 Removing: /var/run/dpdk/spdk_pid88301 00:23:34.484 Removing: /var/run/dpdk/spdk_pid89251 00:23:34.484 Removing: /var/run/dpdk/spdk_pid89575 00:23:34.484 Removing: /var/run/dpdk/spdk_pid90257 00:23:34.484 Removing: /var/run/dpdk/spdk_pid90538 00:23:34.484 Removing: /var/run/dpdk/spdk_pid90611 00:23:34.484 Removing: /var/run/dpdk/spdk_pid90660 00:23:34.484 Removing: /var/run/dpdk/spdk_pid90918 00:23:34.484 Removing: /var/run/dpdk/spdk_pid91098 00:23:34.484 Removing: 
/var/run/dpdk/spdk_pid91197 00:23:34.484 Removing: /var/run/dpdk/spdk_pid91295 00:23:34.484 Removing: /var/run/dpdk/spdk_pid91354 00:23:34.484 Removing: /var/run/dpdk/spdk_pid91385 00:23:34.484 Clean 00:23:34.484 07:20:16 -- common/autotest_common.sh@1453 -- # return 0 00:23:34.484 07:20:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:34.484 07:20:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.484 07:20:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.484 07:20:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:34.484 07:20:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.484 07:20:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.743 07:20:16 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:34.743 07:20:16 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:34.743 07:20:16 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:34.743 07:20:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:34.743 07:20:16 -- spdk/autotest.sh@398 -- # hostname 00:23:34.743 07:20:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:34.743 geninfo: WARNING: invalid characters removed from testname! 
00:24:01.337 07:20:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:04.624 07:20:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:07.908 07:20:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:09.812 07:20:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:12.353 07:20:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:14.260 07:20:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:16.797 07:20:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:16.797 07:20:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:16.797 07:20:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:16.797 07:20:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:16.797 07:20:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:16.797 07:20:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:16.797 + [[ -n 5430 ]] 00:24:16.797 + sudo kill 5430 00:24:16.807 [Pipeline] } 00:24:16.822 [Pipeline] // timeout 00:24:16.827 [Pipeline] } 00:24:16.844 [Pipeline] // stage 00:24:16.849 [Pipeline] } 00:24:16.864 [Pipeline] // catchError 00:24:16.874 [Pipeline] stage 00:24:16.877 [Pipeline] { (Stop VM) 00:24:16.891 [Pipeline] sh 00:24:17.173 + vagrant halt 00:24:20.488 ==> default: Halting domain... 00:24:28.661 [Pipeline] sh 00:24:28.944 + vagrant destroy -f 00:24:31.607 ==> default: Removing domain... 
00:24:31.876 [Pipeline] sh 00:24:32.157 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:24:32.165 [Pipeline] } 00:24:32.182 [Pipeline] // stage 00:24:32.190 [Pipeline] } 00:24:32.204 [Pipeline] // dir 00:24:32.211 [Pipeline] } 00:24:32.226 [Pipeline] // wrap 00:24:32.232 [Pipeline] } 00:24:32.245 [Pipeline] // catchError 00:24:32.256 [Pipeline] stage 00:24:32.258 [Pipeline] { (Epilogue) 00:24:32.272 [Pipeline] sh 00:24:32.553 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:39.129 [Pipeline] catchError 00:24:39.131 [Pipeline] { 00:24:39.143 [Pipeline] sh 00:24:39.426 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:39.426 Artifacts sizes are good 00:24:39.436 [Pipeline] } 00:24:39.452 [Pipeline] // catchError 00:24:39.463 [Pipeline] archiveArtifacts 00:24:39.471 Archiving artifacts 00:24:39.582 [Pipeline] cleanWs 00:24:39.593 [WS-CLEANUP] Deleting project workspace... 00:24:39.593 [WS-CLEANUP] Deferred wipeout is used... 00:24:39.604 [WS-CLEANUP] done 00:24:39.606 [Pipeline] } 00:24:39.625 [Pipeline] // stage 00:24:39.630 [Pipeline] } 00:24:39.648 [Pipeline] // node 00:24:39.653 [Pipeline] End of Pipeline 00:24:39.687 Finished: SUCCESS